Supercharged scams
When ChatGPT was released to the public in late 2022, it opened people’s eyes to how easily generative AI could churn out vast amounts of human-seeming text from simple prompts. This quickly caught the attention of criminals, who soon began using large language models to produce malicious emails—both the untargeted spam kind and more sophisticated, targeted attacks designed to steal money and sensitive information.

Since then, cybercriminals have adopted AI tools to supercharge their operations. They’ve used the technology to do everything from composing phishing emails and creating hyperrealistic, convincing deepfake clips to tweaking malicious software (commonly known as malware) so it is harder to detect. They can also use AI to automate the search for vulnerabilities in networks and computer systems, quickly generate ransom notes, and analyze vast swathes of stolen data to pinpoint what’s most valuable.

AI’s impact on hacking itself is not so clear-cut. But we do know that AI is lowering the barriers for would-be attackers, providing them with an ever-evolving arsenal of new capabilities and making it faster, cheaper, and easier than ever for them to try to infiltrate their targets. Interpol has warned, for example, that scam centers across Southeast Asia are embracing inexpensive AI tools to target greater numbers of potential victims and to swiftly shift operations to new locations. Similarly, the United Arab Emirates recently claimed to have foiled a series of shadowy AI-backed attacks on its vital sectors.

And because these spammy, scattergun attacks can be pumped out at colossal scale, they don’t need to be very sophisticated to have the desired effect—just lucky enough to reach a machine that happens to be undefended, or the inbox of an unsuspecting victim at the right time. Many organizations are already struggling to cope with the sheer volume of cyberattacks targeting them.
The problem is likely to get significantly worse as increasing numbers of criminals try their luck, and as the capabilities of publicly available generative AI systems continue to improve. Earlier this month, AI company Anthropic claimed that Mythos, a model it has developed and is now testing, found thousands of critical vulnerabilities, including some in every major operating system and web browser. Anthropic says all of them have been patched, but it is delaying the model’s release as a result of these new capabilities, and it has set up a consortium of tech companies, called Project Glasswing, that it says will try to put those capabilities to work for defensive purposes in the meantime.

Right now, cybersecurity researchers are optimistic that sloppier attacks can be thwarted through basic defenses, which underscores just how important it is to keep on top of software updates and stick to network security protocols. How well positioned we’ll be to ward off more sophisticated attacks in the future is much less clear.

The good news is that AI is also being used to defend. Each day, Microsoft—just one of the many businesses keeping tabs on such threats—processes more than 100 trillion signals flagged by its AI systems as potentially malicious or suspicious. The company says that between April 2024 and April 2025 it blocked $4 billion worth of scams and fraudulent transactions, many of which may have been aided by AI-generated content. The same technology that makes such attacks possible could also be our best bet for keeping us safe in the years to come.
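Defensive systems like those described above generally aggregate many weak signals rather than relying on any single indicator. As a rough, hypothetical illustration only (this is a toy rule-based sketch, not Microsoft’s pipeline or any vendor’s actual product, which use far richer machine-learned models), a minimal phishing-signal scorer might look like this:

```python
import re

# Toy heuristics; real systems combine thousands of signals with ML models.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "invoice"}
RISKY_EXTENSIONS = {".exe", ".scr", ".js", ".vbs"}

def phishing_score(subject: str, body: str, attachments: list[str]) -> int:
    """Return a crude risk score: higher means more suspicious."""
    score = 0
    text = f"{subject} {body}".lower()
    # Urgency or account-threat language is a classic social-engineering cue.
    score += sum(1 for w in URGENCY_WORDS if w in text)
    # Links whose visible text names one domain but point somewhere else
    # (here matched in markdown-style [text](url) form for simplicity).
    for visible, target in re.findall(r"\[(\S+)\]\((\S+)\)", body):
        if visible.split("/")[0] not in target:
            score += 3
    # Executable attachments are rarely legitimate in unsolicited mail.
    score += sum(2 for a in attachments
                 if any(a.lower().endswith(ext) for ext in RISKY_EXTENSIONS))
    return score
```

The point of the sketch is that each individual heuristic is cheap and fallible; the defensive value comes from aggregating many of them at scale, which is exactly where AI-assisted systems have the advantage.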
Key takeaways
- The adoption of AI by cybercriminals poses a growing challenge to digital security in Brazil.
- Brazilian organizations need to invest in cybersecurity defense technologies to protect against online fraud.
- Collaboration between the public and private sectors will be crucial to develop guidelines that protect citizens and encourage innovation.
Editorial analysis
The rise of generative AI tools, such as ChatGPT, has not only revolutionized how we interact with technology but has also opened new avenues for criminal activities. In Brazil, where digitalization is rapidly advancing, the adoption of AI by cybercriminals poses a significant challenge for businesses and institutions. The ability to easily generate convincing malicious emails may lead to an increase in online fraud, especially in a country where trust in digital communications is still developing. This necessitates that Brazilian organizations reassess their cybersecurity strategies and adopt more robust technologies to protect against these emerging threats.
Moreover, the use of AI to automate the search for vulnerabilities in systems could result in a scenario where attacks become more frequent and sophisticated. Brazilian companies, particularly those operating in critical sectors, must remain vigilant to these changes and consider implementing AI solutions to detect and mitigate risks before they become real problems. Investment in training and cybersecurity defense technologies will be crucial to maintaining data integrity and consumer trust.
Finally, it is important to observe how regulations and public policies may evolve in response to this new reality. The Brazilian government, along with technology companies, should work together to establish guidelines that not only protect citizens but also encourage responsible innovation in AI usage. The future of cybersecurity in Brazil will depend on collaboration between the private and public sectors, as well as ongoing awareness of the threats that technology can bring.
What this coverage includes
- Clear source attribution and link to the original publication.
- Editorial framing about relevance, impact, and likely next developments.
- Review for readability, context, and duplication before publication.
Original source:
MIT Technology Review AI
About this article
This article was curated and published by AIDaily as part of our editorial coverage of artificial intelligence developments. The content is based on the original source cited below, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but publication decisions, factual review, and contextual framing remain editorial responsibilities.