Around seven in 10 people in the UK say that laws and regulation would increase their comfort with AI, up from 62 per cent in 2023, according to a new study published by the Ada Lovelace Institute and the Alan Turing Institute.
The survey of more than 3,500 UK residents examined the public's awareness and perceptions of different uses of AI, their experience of harm, and their expectations regarding governance and regulation.
The report follows a previous survey carried out in 2022, before the release of ChatGPT and other large language model (LLM) based chatbots, which was published in 2023.
The survey found that public awareness of different AI uses varies. While 93 per cent have heard of driverless cars and 90 per cent are aware of facial recognition in policing, just 18 per cent were familiar with the use of AI for welfare benefits assessments.
The survey also found that LLMs are becoming more popular, with 61 per cent saying they have heard of LLMs and 40 per cent saying they have used them.
While many people recognise the benefits of the technology, respondents raised concerns about overreliance on it and a lack of transparency in decision making.
Data is also a concern, with 83 per cent worried about public sector bodies sharing their data with private companies to train AI systems.
The survey also found that exposure to harms from AI is widespread, with two thirds of respondents reporting that they have encountered some form of AI-related harm at least a few times. These harms included false information, financial fraud, and deepfakes.
Some 88 per cent of those surveyed believe it is important that the government or regulators have the power to stop the use of an AI product deemed to pose a risk of serious harm to the public, while over 75 per cent said government or independent regulators, rather than private companies alone, should oversee AI safety.
Octavia Field Reid, associate director at the Ada Lovelace Institute, said the report shows the government needs to take account of public expectations, concerns and experiences in order to develop AI responsibly.
“The government’s current inaction in legislating to address the potential risks and harms of AI technologies is in direct contrast to public concerns and a growing desire for regulation,” she added. “This gap between policy and public expectations creates a risk of backlash, particularly from minoritised groups and those most affected by AI harms, which would hinder the adoption of AI and the realisation of its benefits.
“There will be no greater barrier to delivering on the potential of AI than a lack of public trust.”