Why opinion on AI is so divided
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In an industry that doesn’t stand still, Stanford’s AI Index, an annual roundup of key results and trends, is a chance to take a breath. (It’s a marathon, not a sprint, after all.) This year’s report, which dropped today, is full of striking stats. A lot of the value comes from having numbers to back up gut feelings you might already have, such as the sense that the US is gunning harder for AI than everyone else: It hosts 5,427 data centers (and counting). That’s more than 10 times as many as any other country.

There’s also a reminder that the hardware supply chain the AI industry relies on has some major choke points. Here’s perhaps the most remarkable fact: “A single company, TSMC, fabricates almost every leading AI chip, making the global AI hardware supply chain dependent on one foundry in Taiwan.” One foundry! That’s just wild.

But the main takeaway I have from the 2026 AI Index is that the state of AI right now is shot through with inconsistencies. As my colleague Michelle Kim put it today in her piece about the report: “If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock.” (The Stanford report notes that Google DeepMind’s top reasoning model, Gemini Deep Think, scored a gold medal in the International Math Olympiad but is unable to read analog clocks half the time.)

Michelle does a great job covering the report’s highlights. But I wanted to dwell on a question that I can’t shake: Why is it so hard to know exactly what’s going on in AI right now?

The widest gap seems to be between experts and non-experts. “AI experts and the general public view the technology’s trajectory very differently,” the authors of the AI Index write. “Assessing AI’s impact on jobs, 73% of U.S. experts are positive, compared with only 23% of the public, a 50 percentage point gap. Similar divides emerge with respect to the economy and medical care.”

That’s a huge gap. What’s going on? What do experts know that the public doesn’t? (“Experts” here means US-based researchers who took part in AI conferences in 2023 and 2024.)

I suspect part of what’s going on is that experts and non-experts base their views on very different experiences. “The degree to which you are awed by AI is perfectly correlated with how much you use AI to code,” a software developer posted on X the other day. Maybe that’s tongue-in-cheek, but there’s definitely something to it.

The latest models from the top labs are now better than ever at producing code. Because technical tasks like coding have right or wrong results, it is easier to train models to do them, compared with tasks that are more open-ended. What’s more, models that can code are proving to be profitable, so model makers are throwing resources at improving them. This means that people who use those tools for coding or other technical work are experiencing this technology at its best.

Outside of those use cases, you get more of a mixed bag. LLMs still make dumb mistakes. This phenomenon has become known as the “jagged frontier”: Models are very good at doing some things and less good at others.

The influential AI researcher Andrej Karpathy also had some thoughts. “Judging by my [timeline] there is a growing gap in understanding of AI capability,” he wrote in reply to that X post. He noted that power users (read: people who use LLMs for coding, math, or research) not only keep up to date with the latest models but will often pay $200 a month for the best versions. “The recent improvements in these domains as of this year have been nothing short of staggering,” he continued.

Because LLMs are still improving fast, someone who pays to use Claude Code will in effect be using a different technology from someone who tried using the free version of Claude to plan a wedding six months ago. Those two groups are speaking past each other.

Where does that leave us? I think there are two realities. Yes, AI is far better than a lot of people realize. And yes, it is still pretty bad at a lot of stuff that a lot of people care about (and it may stay that way). Anyone making bets about the future on either side should bear that in mind.
Key points
- The gap between experts and the general public on AI could lead to poorly informed policy and business decisions in Brazil.
- Global dependence on a single chip fabricator, TSMC, highlights the vulnerability of the AI hardware supply chain.
- Promoting AI education events and initiatives is crucial to improving public understanding and aligning expectations.
Editorial analysis
The divide in opinions about artificial intelligence (AI) deserves attention, especially in the Brazilian context, where the technology sector is evolving rapidly. The gap between expert and public perception suggests that communication about AI needs to be clearer and more accessible. In Brazil, where the startup and innovation ecosystem is growing, this lack of understanding can lead to poorly informed policy and business decisions, hindering the technology’s advancement and its adoption in critical sectors such as health and education.
Moreover, global dependence on a single chip fabricator such as TSMC highlights the vulnerability of the AI hardware supply chain. For Brazil, which aims to position itself as a technology hub in Latin America, this represents an opportunity to invest in local capacity building and the development of alternative technologies. Diversifying hardware production could not only mitigate risk but also foster local innovation.
Another important point is the need for closer dialogue between experts and the public. Brazil has a strong academic community in AI, but transferring that knowledge to civil society and the private sector remains a challenge. Promoting events, workshops, and AI education initiatives can help align expectations and deepen understanding of the technology’s potential and risks. This is crucial to ensuring the country does not fall behind in the global race for AI innovation.
Finally, it is essential to monitor how public policies and AI regulations are developing in Brazil. With growing concern about AI’s impact on jobs and the economy, policymakers must weigh different perspectives and experiences as they build an environment that favors responsible, ethical innovation. The future of AI in Brazil will depend on balancing enthusiasm for the technology with the caution needed to mitigate its potential risks.
Original source:
MIT Technology Review AI

About this article
This article was curated and published by AIDaily as part of our editorial coverage of developments in artificial intelligence. The content is based on the cited original source, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but the decision to publish, the factual review, and the contextual framing remain editorial responsibilities.