LLMs

Artificial scientists

Published by AIDaily Editorial Team
3 min read
Original source author: Grace Huckins



AI companies frequently invoke the possibility of AI-enabled scientific discovery as a justification for their existence: If the technology eventually cures cancer and solves climate change, then all the carbon emissions and slop videos will have been well worth it. Already, LLMs can assist scientists in all sorts of ways. They can point people to relevant studies in the literature, draft journal articles, and, of course, write code. But AI companies and academic researchers alike have a much more ambitious vision for AI co-scientists. They want to develop systems that can act as a full member of a scientific team or, even more ambitiously, initiate and carry out research projects with limited human guidance.

Google DeepMind has invested heavily in scientific AI for years, and it paid off in 2024 when Demis Hassabis and John Jumper, the company's CEO and a director, respectively, won the Nobel Prize in chemistry for AlphaFold, a specialized system that can predict the three-dimensional structure of a protein. Now its competitors are working to catch up. In October 2025, OpenAI launched a team devoted to AI for science, and Anthropic announced several Claude features geared toward the biological sciences around the same time. OpenAI in particular has called building an autonomous researcher its "North Star." It just announced GPT‑Rosalind, the first in a planned series of specialized scientific models. Google released its own AI co-scientist tool in February 2025.

Under the hood, many of these AI-for-science systems are in fact multiple specialized AI agents working in concert. Google's co-scientist uses a supervisor agent, a generation agent, and a ranking agent, among several others, to generate potential hypotheses and research plans in response to a goal provided by a human scientist. More recently, researchers at Stanford's AI for Science Lab, led by James Zou, devised a "virtual lab" made up of agents that took on the roles of specialists in different scientific fields. They found that their system could design new antibody fragments that bind to SARS-CoV-2, the virus that causes COVID-19.

Unlike human scientists, however, those teams of agents can't yet go out and test their ideas in the lab. To overcome that limitation, some researchers are plugging LLMs into experiment-running robots. In February, OpenAI announced that it had connected GPT-5 directly with automated biological laboratories built by the company Ginkgo Bioworks so that the AI system could iteratively propose experiments and interpret the results with limited human involvement. This approach allowed the system to run a gargantuan number of experiments and create a recipe that reduced the cost of synthesizing a particular protein by 40%.

AI-powered science seems like a win for frontier labs and for society at large. But research suggests it could have unintended consequences. A recent Nature study found that while individual scientists see professional advantages from adopting AI, science on the whole may suffer, because AI reduces the scope of what the scientific community investigates. That might be because AI is especially good at analyzing preexisting data sets and literature, so scientists who use it gravitate toward established topic areas where large-scale data is available. That could leave fewer scientists to study problems less amenable to AI. Integrating AI effectively into science is more than just a technical problem: Maintaining the vibrancy and diversity of science in the AI era may require concerted effort from the scientific community.
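The division of labor described above, with a generation agent proposing hypotheses, a ranking agent scoring them, and a supervisor coordinating the two, can be sketched as a simple pipeline. This is an illustrative toy, not Google's actual implementation: the agent functions below stand in for LLM calls, and the scoring heuristic is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str
    score: float = 0.0

def generation_agent(goal: str, n: int = 3) -> list[Hypothesis]:
    # Stand-in for an LLM call that drafts candidate hypotheses for a goal.
    return [Hypothesis(f"Hypothesis {i + 1} for: {goal}") for i in range(n)]

def ranking_agent(candidates: list[Hypothesis]) -> list[Hypothesis]:
    # Stand-in for an LLM judge; here a dummy heuristic (text length)
    # replaces a real evaluation of novelty and plausibility.
    for h in candidates:
        h.score = float(len(h.text))
    return sorted(candidates, key=lambda h: h.score, reverse=True)

def supervisor(goal: str) -> Hypothesis:
    # The supervisor orchestrates the other agents and returns the top pick.
    return ranking_agent(generation_agent(goal))[0]

best = supervisor("reduce the cost of synthesizing protein X")
print(best.text)
```

In a real system, each function would wrap a separate model prompt, and the supervisor would also route follow-up tasks (literature review, experiment planning) to further specialized agents.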
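The propose-measure-interpret loop that connects a model to an automated lab can be sketched as a greedy search over a recipe parameter. Everything here is hypothetical: `propose_recipe` and `run_experiment` stand in for the LLM and the robotic lab, and the cost curve is invented for illustration.

```python
import random

random.seed(0)  # make the illustrative run reproducible

def propose_recipe(history):
    # Stand-in for an LLM proposing the next experiment given past results:
    # perturb the best-known recipe parameter slightly.
    if not history:
        return 1.0  # initial guess
    best_param, _ = min(history, key=lambda r: r[1])
    return best_param + random.uniform(-0.1, 0.1)

def run_experiment(param):
    # Stand-in for the automated lab; returns a synthesis cost to minimize.
    # Invented cost curve with its optimum at param = 0.3.
    return (param - 0.3) ** 2 + 1.0

history = []
for _ in range(50):
    param = propose_recipe(history)
    cost = run_experiment(param)
    history.append((param, cost))  # the model "interprets" these next round

baseline = run_experiment(1.0)
best_cost = min(c for _, c in history)
print(f"cost reduced by {100 * (1 - best_cost / baseline):.0f}%")
```

The point of closing the loop is volume: because proposal and measurement need no human in between, the system can afford many iterations of this kind of trial-and-refinement.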

Key takeaways

  • AI can accelerate scientific discoveries in Brazil, especially in health and sustainability.
  • Competition among tech companies may stimulate innovation in the Brazilian ecosystem.
  • It is crucial to address the ethical implications of using AI in scientific research.

Editorial analysis

The rise of artificial intelligence as a collaborator in scientific research represents a paradigm shift that could significantly impact the technology sector in Brazil. With the increasing capabilities of language models and AI systems like AlphaFold, the possibility of accelerating scientific discoveries becomes more tangible. This is particularly relevant for Brazil, which faces challenges in areas such as public health and environmental sustainability. The implementation of AI in laboratories and research centers can not only optimize processes but also democratize access to advanced technologies, allowing Brazilian researchers to compete on equal footing with their international counterparts.

Moreover, the competition among tech giants like Google and OpenAI to develop autonomous scientific research systems could stimulate an innovation ecosystem in Brazil. Brazilian startups and universities have the opportunity to explore partnerships and develop their own AI solutions, leveraging local expertise and the specific needs of the country. Collaboration between academia and industry will be crucial to ensure that Brazil does not fall behind in this technological race.

However, it is essential to consider the ethical and social implications of using AI in science. The automation of research processes raises questions about accountability and transparency in science. How can we ensure that discoveries made by AI systems are interpreted and applied ethically? Brazil, with its diversity and social complexity, must be vigilant about these issues to prevent technology from amplifying existing inequalities. The future of scientific research in Brazil may be bright, but it requires a careful and reflective approach to the role of AI.

Finally, what to watch for in the coming years includes the development of regulations governing the use of AI in science, as well as the evolution of these systems' capabilities. Brazil should prepare to integrate these technologies into its research institutions while promoting public discussion about their implications. The balance between innovation and responsibility will be crucial for the success of AI in Brazilian science.

What this coverage includes

  • Clear source attribution and link to the original publication.
  • Editorial framing about relevance, impact, and likely next developments.
  • Review for readability, context, and duplication before publication.

Original source:

MIT Technology Review AI

About this article

This article was curated and published by AIDaily as part of our editorial coverage of artificial intelligence developments. The content is based on the original source cited below, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but publication decisions, factual review, and contextual framing remain editorial responsibilities.
