Anthropic launches code review tool to check flood of AI-generated code
Anthropic launched Code Review in Claude Code, a multi-agent system that automatically analyzes AI-generated code, flags logic errors, and helps enterprise developers manage the growing volume of code produced with AI.
When it comes to coding, peer feedback is crucial for catching bugs early, maintaining consistency across a codebase, and improving overall software quality.
The rise of “vibe coding” — using AI tools that take instructions given in plain language and quickly generate large amounts of code — has changed how developers work. While these tools have sped up development, they have also introduced new bugs, security risks, and poorly understood code.
Anthropic’s solution is an AI reviewer designed to catch bugs before they make it into the software’s codebase. The new product, called Code Review, launched Monday in Claude Code.
“We’ve seen a lot of growth in Claude Code, especially within the enterprise, and one of the questions that we keep getting from enterprise leaders is: Now that Claude Code is putting up a bunch of pull requests, how do I make sure that those get reviewed in an efficient manner?” Cat Wu, Anthropic’s head of product, told TechCrunch.
Pull requests are the mechanism developers use to submit code changes for review before those changes are merged into the software. Wu said Claude Code has dramatically increased code output, swelling the queue of pull requests awaiting review and creating a bottleneck to shipping code.
“Code Review is our answer to that,” Wu said.
Anthropic’s launch of Code Review — arriving first to Claude for Teams and Claude for Enterprise customers in research preview — comes at a pivotal moment for the company.
On Monday, Anthropic filed two lawsuits against the Department of Defense in response to the agency’s designation of Anthropic as a supply chain risk. The dispute will likely see Anthropic leaning more heavily on its booming enterprise business, which has seen subscriptions quadruple since the start of the year. Claude Code’s run-rate revenue has surpassed $2.5 billion since launch, according to the company.
“This product is very much targeted towards our larger scale enterprise users, so companies like Uber, Salesforce, Accenture, who already use Claude Code and now want help with the sheer amount of [pull requests] that it’s helping produce,” Wu said.
She added that developer leads can turn on Code Review to run on default for every engineer on the team. Once enabled, it integrates with GitHub and automatically analyzes pull requests, leaving comments directly on the code explaining potential issues and suggested fixes.
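The kind of GitHub integration described above can be sketched with GitHub’s public REST API, which has an endpoint for creating review comments on a pull request. This is a hypothetical illustration of how an automated reviewer might post a finding, not Anthropic’s implementation; the repository, token, file path, and comment text are placeholders.

```python
# Hypothetical sketch: posting an automated review comment on a pull request
# via GitHub's REST API (POST /repos/{owner}/{repo}/pulls/{number}/comments).
# All identifiers below (owner, repo, commit id, path) are placeholders.
import json
import urllib.request

def build_review_comment(commit_id, path, line, body):
    # Payload shape follows GitHub's "create a review comment" endpoint:
    # the comment is anchored to a specific file and line of the diff.
    return {
        "commit_id": commit_id,
        "path": path,
        "line": line,
        "side": "RIGHT",  # comment on the new version of the line
        "body": body,
    }

def post_review_comment(owner, repo, pr_number, token, payload):
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/comments"
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build a comment payload; actually sending it would require a real token.
payload = build_review_comment(
    "abc123", "src/app.py", 42,
    "Possible logic error: loop bound excludes the last element.",
)
```

A real review bot would typically be triggered by a webhook on pull-request events rather than called by hand, but the comment payload is the same either way.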
The focus is on catching logic errors rather than style issues, Wu said.
“This is really important because a lot of developers have seen AI automated feedback before, and they get annoyed when it’s not immediately actionable,” Wu said. “We decided we’re going to focus purely on logic errors. This way we’re catching the highest priority things to fix.”
The AI explains its reasoning step by step, outlining what it thinks the issue is, why it might be problematic, and how it can potentially be fixed. The system will label the severity of issues using colors: red for highest severity, yellow for potential problems worth reviewing, and purple for issues tied to pre-existing code or historical bugs.
Wu said it does this quickly and efficiently by relying on multiple agents working in parallel, with each agent examining the codebase from a different perspective or dimension. A final agent aggregates and ranks the findings, removing duplicates and prioritizing what’s most important.
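The pipeline Wu describes — parallel agents examining the same change from different angles, with a final pass that deduplicates and ranks findings — can be sketched roughly as follows. The agent names, finding format, and severity scheme below are illustrative guesses based on the article’s description, not Anthropic’s actual design.

```python
# Hypothetical sketch of a parallel multi-agent review pipeline: several
# reviewers scan the same diff from different perspectives, then a final
# aggregation step removes duplicates and ranks findings by severity.
from concurrent.futures import ThreadPoolExecutor

# Color scheme from the article: red = highest severity, yellow = worth
# reviewing, purple = tied to pre-existing code or historical bugs.
SEVERITY_RANK = {"red": 0, "yellow": 1, "purple": 2}

def logic_agent(diff):
    return [{"line": 12, "severity": "red",
             "note": "possible off-by-one in loop bound"}]

def security_agent(diff):
    return [{"line": 30, "severity": "yellow",
             "note": "unvalidated input reaches query"}]

def history_agent(diff):
    # Overlaps with logic_agent on purpose, to show deduplication.
    return [{"line": 12, "severity": "red",
             "note": "possible off-by-one in loop bound"},
            {"line": 7, "severity": "purple",
             "note": "touches code tied to a past bug"}]

def review(diff, agents):
    # Run every agent over the same diff in parallel.
    with ThreadPoolExecutor() as pool:
        per_agent = list(pool.map(lambda agent: agent(diff), agents))
    # Aggregate: flatten, drop duplicate findings, rank by severity.
    seen, findings = set(), []
    for finding in (f for agent_findings in per_agent for f in agent_findings):
        key = (finding["line"], finding["note"])
        if key not in seen:
            seen.add(key)
            findings.append(finding)
    return sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]])

findings = review("diff text here", [logic_agent, security_agent, history_agent])
```

Running the agents concurrently is what lets a multi-agent reviewer stay fast despite doing several independent passes, which is also why, as Wu notes below, the approach is token-hungry.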
The tool provides a light security analysis, and engineering leads can customize additional checks based on internal best practices. Wu said Anthropic’s more recently launched Claude Code Security provides a deeper security analysis.
The multi-agent architecture does mean this can be a resource-intensive product, Wu said. Similar to other AI services, pricing is token-based, and the cost varies depending on code complexity — though Wu estimated each review would cost $15 to $25 on average. She added that it’s a premium experience, and a necessary one as AI tools generate more and more code.
“[Code Review] is something that’s coming from an insane amount of market pull,” Wu said. “As engineers develop with Claude Code, they’re seeing the friction to creating a new feature [decrease], and they’re seeing a much higher demand for code review. So we’re hopeful that with this, we’ll enable enterprises to build faster than they ever could before, and with much fewer bugs than they ever had before.”
Rebecca Bellan is a senior reporter at TechCrunch where she covers the business, policy, and emerging trends shaping artificial intelligence. Her work has also appeared in Forbes, Bloomberg, The Atlantic, The Daily Beast, and other publications.
You can contact or verify outreach from Rebecca by emailing rebecca.bellan@techcrunch.com or via encrypted message at rebeccabellan.491 on Signal.