Artificial Intelligence

How to prepare for and remediate an AI system incident

Published by AIDaily Editorial Team
4 min read
Original source author: David Thomas

For all the possibilities AI gives us, there is always a chance of the technology malfunctioning or becoming compromised. In the event of an AI system crisis, new research from ISACA has found that the majority of organisations surveyed couldn't explain how quickly they could stop an AI system emergency, or even report on what caused the issue.

According to ISACA's report, 59% of digital trust professionals didn't know how quickly their organisation could interrupt and halt an AI system during a security incident. Just 21% reported that they could meaningfully step in within half an hour. This points to a landscape where corrupted AI systems can continue to operate unchecked, risking irreversible damage.

Ali Sarrafi, CEO and Founder of Kovant, an autonomous enterprise platform, said: "ISACA's findings point to a major structural issue in the way that organisations are deploying AI. Systems are being embedded into critical workflows without the governance layer needed to supervise and audit their actions. If a business cannot quickly halt an AI system, explain its behaviour, or even identify who is to be held accountable, the business is not in control of that system."

AI failures and risks

In all, only 42% of respondents expressed any confidence in their organisation's ability to analyse and explain serious AI incidents, leaving the rest exposed to operational failures and security risks. Moreover, without being able to explain these incidents to regulators and leadership, businesses may face legal penalties and public backlash. Proper analysis is needed to learn from mistakes; without a clear understanding, the likelihood of repeated incidents only increases.

Managing AI responsibly, with effective AI governance, is essential, yet ISACA's findings indicate this is often missing. Accountability is another fuzzy area, with 20% reporting that they do not know who would be responsible if an AI system caused damage. Just 38% identified the board or an executive as ultimately responsible.

Sarrafi noted that slowing down AI adoption is not the answer; instead, rethinking how it is managed is key. "AI systems need to sit in a structured management layer that treats them as digital employees, with clear ownership, defined escalation paths, and the ability to be paused or overridden instantly when risk thresholds are crossed. That way, agents stop being mysterious bots and become systems you can inspect and trust. As AI becomes more deeply embedded in core business functions, governance cannot be an afterthought. It has to be built into the architecture from day one, with visibility and control designed in at every level. The organisations that get this right will not just reduce risk; they will be the ones that can confidently scale AI in the business."

There is some reassurance, however, with 40% of respondents saying humans approve almost all AI actions before they are deployed, and a further 26% evaluating AI outcomes. That being said, without an improved governance infrastructure, human oversight is unlikely to be enough to identify and resolve issues before they escalate.

ISACA's findings point towards a major structural issue in how AI is being deployed across sectors. With over a third of organisations not requiring employees to disclose where and when AI is used in work products, the potential for blind spots increases. Despite more stringent regulations that make senior leadership more accountable, organisations are failing to implement and use AI safely and effectively.
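Sarrafi's "digital employee" framing maps naturally onto code. The Python below is a minimal, hypothetical sketch of what such a management layer could look like: each AI system is registered with a named owner and an escalation path, every action is audit-logged so incidents can later be explained, and a risk score past a threshold trips an instant pause. The class names, threshold, and addresses are illustrative assumptions, not anything taken from ISACA's report or Kovant's product.

```python
# Hypothetical sketch of a governance layer that treats an AI agent as a
# "digital employee": named owner, escalation path, audit log, and a kill
# switch that trips when a risk threshold is crossed. All names and the
# threshold value are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GovernedAgent:
    name: str
    owner: str                      # the accountable human, on record
    escalation_path: list[str]      # who gets notified, in order
    risk_threshold: float = 0.8     # illustrative score in [0, 1]
    paused: bool = False
    audit_log: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        """Append a timestamped entry so incidents can be explained later."""
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

    def act(self, action: str, risk_score: float) -> bool:
        """Run an action only if the agent is live and under threshold."""
        if self.paused:
            self.record(f"BLOCKED (paused): {action}")
            return False
        if risk_score >= self.risk_threshold:
            self.pause(reason=f"risk {risk_score:.2f} on action '{action}'")
            return False
        self.record(f"EXECUTED: {action} (risk {risk_score:.2f})")
        return True

    def pause(self, reason: str) -> None:
        """Instant override: halt the agent and alert the escalation path."""
        self.paused = True
        self.record(f"PAUSED: {reason}; notifying {self.escalation_path}")

agent = GovernedAgent(
    name="invoice-triage-bot",
    owner="coo@example.com",
    escalation_path=["oncall-ml@example.com", "ciso@example.com"],
)
agent.act("approve refund #1042", risk_score=0.35)   # runs, logged
agent.act("approve refund #9999", risk_score=0.92)   # trips the kill switch
agent.act("approve refund #1043", risk_score=0.10)   # blocked until review
```

The point of the sketch is the shape, not the specifics: ownership, escalation, and the pause path exist before the agent ever acts, which is exactly the inversion of the "governance as an afterthought" pattern the survey describes.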
It seems many businesses are treating AI risk as a technical problem rather than something that requires careful management across the entire organisation. Changing how AI is integrated, and how its actions are supervised, is essential. Without proper governance and accountability, businesses are not in control of their AI systems. And without control, even small errors could cause reputational and financial harm that many businesses may not recover from.
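For the human-approval pattern the survey highlights (the 40% of organisations where humans sign off on AI actions before they take effect), a maker-checker gate is one common way to enforce it. The sketch below is a hypothetical illustration under that assumption; it is not drawn from the report, and all names are invented.

```python
# Hypothetical maker-checker gate: the AI proposes, a named human approves
# or rejects, and nothing deploys without a recorded approval.

from dataclasses import dataclass

@dataclass
class Proposal:
    agent: str
    action: str
    status: str = "pending"       # pending -> approved | rejected
    reviewer: str | None = None

def review(p: Proposal, reviewer: str, approve: bool) -> Proposal:
    """Record the human decision before the action can run."""
    p.status = "approved" if approve else "rejected"
    p.reviewer = reviewer
    return p

def deploy(p: Proposal) -> None:
    """Refuse anything that lacks an explicit, attributed approval."""
    if p.status != "approved":
        raise PermissionError(f"{p.action!r} blocked: status={p.status}")
    print(f"deploying {p.action!r} (approved by {p.reviewer})")

p = Proposal(agent="pricing-bot", action="raise price of SKU-17 by 4%")
deploy(review(p, reviewer="ops-lead@example.com", approve=True))
```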

Key takeaways

  • Most organizations surveyed lack a clear plan to halt AI systems in a crisis.
  • The lack of accountability in AI failures could lead to legal complications and loss of public trust.
  • It is essential for companies to reassess their AI governance practices to mitigate risks and ensure accountability.

Editorial analysis

ISACA's research highlights a growing concern regarding the governance of AI systems, especially in a context where Brazil is accelerating its adoption of these technologies. The lack of clarity on how to halt an AI system in crisis may indicate that many Brazilian companies have yet to develop a robust framework for managing these digital assets. This is alarming, as AI is becoming increasingly integrated into critical processes, and the inability to respond quickly to incidents can result in irreparable damage, both financially and reputationally.

Moreover, the issue of accountability in cases of AI failures is crucial. With only 38% of respondents identifying senior management as responsible, it is evident that many organizations have not established a culture of accountability regarding AI. In Brazil, where legislation on data protection and digital responsibility is evolving, this lack of clarity could lead to legal complications and increased public distrust in the use of AI.

The current scenario demands that Brazilian companies reassess not only the speed of AI adoption but also how they manage these systems. The proposal to treat AI systems as "digital employees" with oversight structures and response protocols is an approach that could help mitigate risks. As technology advances, the need for effective governance becomes even more pressing, especially in a country that seeks to position itself as a leader in technological innovation in Latin America.

Finally, the implications for the Brazilian tech ecosystem are vast. As more startups and established companies incorporate AI into their operations, the need for governance frameworks and risk management becomes critical. What to watch for next is how organizations will respond to these findings and whether they will implement significant changes in their AI management practices to prevent future crises.

What this coverage includes

  • Clear source attribution and link to the original publication.
  • Editorial framing about relevance, impact, and likely next developments.
  • Review for readability, context, and duplication before publication.

Original source:

AI News

About this article

This article was curated and published by AIDaily as part of our editorial coverage of artificial intelligence developments. The content is based on the original source cited below, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but publication decisions, factual review, and contextual framing remain editorial responsibilities.

Learn more about our editorial process