Why opinion on AI is so divided
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In an industry that doesn't stand still, Stanford's AI Index, an annual roundup of key results and trends, is a chance to take a breath. (It's a marathon, not a sprint, after all.) This year's report, which dropped today, is full of striking stats.

A lot of the value comes from having numbers to back up gut feelings you might already have, such as the sense that the US is gunning harder for AI than everyone else: It hosts 5,427 data centers (and counting). That's more than 10 times as many as any other country.

There's also a reminder that the hardware supply chain the AI industry relies on has some major choke points. Here's perhaps the most remarkable fact: "A single company, TSMC, fabricates almost every leading AI chip, making the global AI hardware supply chain dependent on one foundry in Taiwan." One foundry! That's just wild.

But the main takeaway I have from the 2026 AI Index is that the state of AI right now is shot through with inconsistencies. As my colleague Michelle Kim put it today in her piece about the report: "If you're following AI news, you're probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can't even read a clock." (The Stanford report notes that Google DeepMind's top reasoning model, Gemini Deep Think, scored a gold medal in the International Math Olympiad but is unable to read analog clocks half the time.)

Michelle does a great job covering the report's highlights. But I wanted to dwell on a question that I can't shake: Why is it so hard to know exactly what's going on in AI right now?

The widest gap seems to be between experts and non-experts. "AI experts and the general public view the technology's trajectory very differently," the authors of the AI Index write. "Assessing AI's impact on jobs, 73% of U.S. experts are positive, compared with only 23% of the public, a 50 percentage point gap. Similar divides emerge with respect to the economy and medical care."

That's a huge gap. What's going on? What do experts know that the public doesn't? ("Experts" here means US-based researchers who took part in AI conferences in 2023 and 2024.)

I suspect part of what's going on is that experts and non-experts base their views on very different experiences. "The degree to which you are awed by AI is perfectly correlated with how much you use AI to code," a software developer posted on X the other day. Maybe that's tongue-in-cheek, but there's definitely something to it.

The latest models from the top labs are now better than ever at producing code. Because technical tasks like coding have right or wrong answers, it is easier to train models to do them than to handle more open-ended tasks. What's more, models that can code are proving to be profitable, so model makers are throwing resources at improving them.

This means that people who use those tools for coding or other technical work are experiencing this technology at its best. Outside those use cases, you get more of a mixed bag. LLMs still make dumb mistakes. This phenomenon has become known as the "jagged frontier": Models are very good at some things and less good at others.

The influential AI researcher Andrej Karpathy also had some thoughts. "Judging by my [timeline] there is a growing gap in understanding of AI capability," he wrote in reply to that X post. He noted that power users (read: people who use LLMs for coding, math, or research) not only keep up to date with the latest models but will often pay $200 a month for the best versions. "The recent improvements in these domains as of this year have been nothing short of staggering," he continued.

Because LLMs are still improving fast, someone who pays to use Claude Code is in effect using a different technology from someone who tried the free version of Claude to plan a wedding six months ago. Those two groups are speaking past each other.

Where does that leave us? I think there are two realities. Yes, AI is far better than a lot of people realize. And yes, it is still pretty bad at a lot of stuff that a lot of people care about (and it may stay that way). Anyone making bets about the future on either side should bear that in mind.
Key takeaways
- The gap between expert and public opinion on AI could lead to poorly informed political and business decisions in Brazil.
- Global dependence on a single chip manufacturer, TSMC, highlights the vulnerability of the AI hardware supply chain.
- Promoting events and educational initiatives in AI is crucial to increase public understanding and align expectations.
Editorial analysis
The division of opinions on artificial intelligence (AI) is a phenomenon that deserves attention, especially in the Brazilian context, where the technology sector is rapidly evolving. The discrepancy between the perception of experts and the general public suggests that communication about AI needs to be clearer and more accessible. In Brazil, where the startup and innovation ecosystem is growing, this lack of understanding can lead to poorly informed political and business decisions, hindering technological advancement and its adoption in critical sectors such as healthcare and education.
Moreover, the global dependence on a single chip manufacturer, TSMC, highlights the vulnerability of the AI hardware supply chain. For Brazil, which seeks to position itself as a technology hub in Latin America, this represents an opportunity to invest in local capacity building and the development of alternative technologies. Diversifying hardware production could not only mitigate risks but also foster local innovation.
Another important point is the need for closer dialogue between experts and the public. Brazil has a strong academic community in AI, but the transfer of knowledge to civil society and the private sector remains a challenge. Promoting events, workshops, and educational initiatives in AI can help align expectations and increase understanding of the technology's potential and risks. This is crucial to ensure that the country does not fall behind in the global race for AI innovations.
Finally, it is essential to monitor how public policies and regulations regarding AI are developing in Brazil. With growing concerns about the impact of AI on employment and the economy, it is vital for policymakers to consider different perspectives and experiences when creating an environment that favors responsible and ethical innovation. The future of AI in Brazil will depend on balancing enthusiasm for technology with the caution necessary to mitigate its potential risks.
What this coverage includes
- Clear source attribution and link to the original publication.
- Editorial framing about relevance, impact, and likely next developments.
- Review for readability, context, and duplication before publication.
Original source:
MIT Technology Review AI
About this article
This article was curated and published by AIDaily as part of our editorial coverage of artificial intelligence developments. The content is based on the original source cited below, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but publication decisions, factual review, and contextual framing remain editorial responsibilities.