A Norwegian man is demanding OpenAI be fined by the country’s data regulator after ChatGPT falsely reported that he had killed two of his sons and spent 21 years in prison.
Arve Hjalmar Holmen reported the incident to the Norwegian Data Protection Authority, filing a complaint that accuses the chatbot of defamation and damage to personal reputation.
Holmen said the chatbot produced the hallucinatory response when he asked it “Who is Arve Hjalmar Holmen?”, with ChatGPT fabricating a story that labelled him a convicted murderer.
ChatGPT’s response alleged: “Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”
“Some people think there is no smoke without fire – the fact that someone might read this output and believe it to be true is what scares me the most,” said Holmen in a statement.
Hallucinations, in which AI chatbots present false information as fact, are one of the main problems computer scientists are trying to solve in generative AI.
Digital rights group noyb, the organisation that filed Holmen’s complaint, argued that by knowingly allowing ChatGPT to produce defamatory results, OpenAI violates the GDPR’s data accuracy principle, adding that the case is not isolated.
“In the past, ChatGPT has falsely accused people of corruption, child abuse or even murder,” the organisation added.
The organisation said it is not enough to show ChatGPT users a disclaimer that the chatbot can make mistakes, and called on AI companies to stop acting as if the GDPR does not apply to them, pointing to the reputational damage people can suffer when such errors occur.
Joakim Söderberg, data protection lawyer at noyb, explained: “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
He added: “The GDPR is clear. Personal data has to be accurate. And if it’s not, users have the right to have it changed to reflect the truth.”
OpenAI responded by saying that the incident happened on a previous ChatGPT model, which has now been updated.
“Unfortunately, it also seems like OpenAI is neither interested nor capable to seriously fix false information in ChatGPT,” continued noyb.
In April 2024, noyb filed its first complaint concerning hallucinations, requesting the rectification or erasure of a public figure’s incorrect date of birth.
“OpenAI simply argued it couldn’t correct data,” said the organisation. “Instead, it can only ‘block’ data on certain prompts, but the false information still remains in the system.”