Business leaders still have “significant concerns” about the use of generative AI (genAI) despite the advances the technology could offer, according to new research by consulting firm KPMG UK.
In a survey, 60 per cent of respondents said they were concerned about the inaccuracy of results when adopting genAI. This includes hallucinations, when a large language model (LLM) generates false or misleading information.
The research also found that half of organisations are worried about errors in the underlying data and information which could influence the model’s outputs, while 50 per cent of respondents said they were worried about cybersecurity issues.
Around a third of organisations said they had published responsible genAI usage guidelines to mitigate potential threats.
The report said algorithmic bias is another major issue for 43 per cent of respondents, but only eight per cent said they had processes in place to measure it.
Almost a quarter of businesses have genAI training in place or are currently developing training schemes. Yet KPMG found that 68 per cent of respondents said individual directors were largely self-taught.
“Given boards’ concerns, it’s important that companies thoughtfully define a clear AI strategy rather than merely chase the next technological innovation,” said Leanne Allen, head of AI at KPMG UK. “This strategy should balance the value, cost, and risk associated with AI use cases. This strategic equilibrium is crucial for both progress and stakeholder trust.”