European organisations are adopting artificial intelligence systems at a rapid pace, but are failing to implement adequate governance and safety controls to keep them in check.
That is the finding of a new study from IT governance trade association ISACA, which surveyed 681 European technology and business professionals between 6 and 22 February.
Alarmingly, 59 per cent of respondents admitted they were unsure whether their organisation would be capable of stopping an AI system affected by a security incident.
Just over one-fifth of respondents (21 per cent) said they could shut down such a system within 30 minutes. ISACA says this indicates that, if an AI system were “compromised or malfunctioning”, most organisations would be unable to stop it for at least half an hour.
The core reason behind organisations’ inability to rein in problematic AI systems appears to be that they are not taking their governance obligations seriously enough.
Currently, 33 per cent of organisations have no formal policy or procedure requiring employees to disclose when they use AI at work.
Meanwhile, 20 per cent of organisations are unsure who to hold accountable when AI systems fail. Just 38 per cent said this responsibility would lie with the company’s board or an executive.
According to ISACA, these figures are concerning because the European Union’s AI Act requires organisations deploying AI systems to be transparent about how they use the technology, so that employees understand it, and to be accountable when issues arise.
There is also a growing expectation among global regulators that leadership teams will be held accountable for AI-related issues, a sign that the safety of this technology is now a top boardroom priority.
When it comes to AI oversight, the study paints a slightly better picture: 40 per cent of organisations have implemented rules ensuring AI systems cannot make decisions without prior approval from a human, an approach that aligns with regulatory expectations.
Chris Dimitriadis, chief global strategy officer at ISACA, said: “What this research reflects is that our thirst to innovate is not matched by our desire to govern change, exposing us to critical risks.
“The tools to govern AI responsibly already exist. Risk management, prevention controls, detection mechanisms, incident response and recovery strategies are the foundations of good cybersecurity practice, and they need to be applied to AI with the same rigour and urgency.”