OpenAI is once again under fire in Europe as another privacy complaint targets ChatGPT’s dangerous tendency to generate false and defamatory information about individuals. This time, privacy advocacy group Noyb is backing a Norwegian man after the AI falsely accused him of murdering his own children.
According to Noyb, ChatGPT wrongly claimed that Arve Hjalmar Holmen had been convicted of killing two of his children and attempting to murder the third, a shocking fabrication with no basis in reality. While previous complaints involved comparatively minor errors, such as wrong birth dates or inaccurate biographical details, this case shows just how severe the harm from AI-generated hallucinations can be.
The problem, Noyb points out, is that OpenAI offers no proper way for users to correct false personal data. Instead, the company typically opts to block certain prompts. However, under the European Union’s General Data Protection Regulation (GDPR), users have the right to rectify inaccurate personal data — a right OpenAI appears to be ignoring.
Joakim Söderberg, a data protection lawyer at Noyb, emphasized this point, stating: “The GDPR is clear. Personal data has to be accurate. If it’s not, users have the right to correct it. Simply adding a disclaimer that ChatGPT may be wrong is not good enough. You can’t just spread lies and then hide behind a warning.”
GDPR violations carry hefty penalties, with fines reaching up to 4% of a company’s global annual turnover. Beyond financial consequences, enforcement could also force AI companies like OpenAI to rethink how their tools handle personal data.
A similar case in 2023 saw Italy’s data protection watchdog temporarily block ChatGPT, prompting OpenAI to update its transparency measures. The Italian authority later fined OpenAI €15 million for processing personal data without a valid legal basis.
Since then, however, European privacy regulators have been cautious about directly challenging generative AI (GenAI) tools. Ireland’s Data Protection Commission (DPC), which is handling an earlier Noyb complaint, has advised against rushing into bans, instead urging careful assessment of how GDPR applies to AI systems.
Poland’s data protection authority has also been investigating a separate ChatGPT privacy complaint since September 2023 — yet no decision has been reached.
Despite these delays, Noyb’s latest complaint aims to jolt regulators into action, highlighting the severe risks AI hallucinations pose to people’s reputations.
Noyb shared screenshots of the disturbing exchange where ChatGPT responded to the prompt “Who is Arve Hjalmar Holmen?” by inventing a horrifying story. The AI falsely claimed Holmen had been sentenced to 21 years in prison for killing his two sons. Disturbingly, the AI also got some real-life details right — it correctly listed the number and genders of his children and even named his hometown. This blend of truth and fiction made the fabricated crime even more unsettling.
Noyb investigated thoroughly, checking news archives to rule out a case of mistaken identity, but found no record that could explain the AI's grim fabrication. A Noyb spokesperson expressed concern over how ChatGPT could hallucinate such specific and damaging details.
Experts suggest that ChatGPT’s large language model might have been influenced by the frequency of filicide stories in its training data, causing it to generate a sensationalized response when prompted about an unknown individual.
Regardless of the reason, Noyb argues this is unacceptable and illegal under EU data protection laws. OpenAI’s disclaimer that “ChatGPT can make mistakes. Check important info” doesn’t exempt it from legal responsibilities, Noyb insists.
This isn’t the first time ChatGPT has fabricated serious allegations. Other examples include an Australian mayor falsely linked to a bribery scandal and a German journalist wrongly labeled a child abuser. These repeated failures show the issue isn’t isolated: ChatGPT’s hallucinations can and do cause real reputational harm.
Interestingly, Noyb noted that after a model update, ChatGPT stopped generating the false story about Holmen. Instead, it pulled information from the internet when answering queries about him — a change that might reduce future hallucinations. When TechCrunch tested it, ChatGPT returned mixed results, from random images to identifying Holmen as a Norwegian musician with fictional albums.
Yet, both Holmen and Noyb worry that defamatory content about him could still linger within the AI’s training data, posing future risks.
“Adding a disclaimer doesn’t mean the law disappears,” said Noyb lawyer Kleanthi Sardeli. “AI companies can’t secretly process false information while showing sanitized responses to users. If hallucinations continue, real people suffer real reputational damage.”
Noyb has filed its latest complaint with Norway’s data protection authority, arguing that OpenAI’s U.S. headquarters — not just its Irish office — should be held accountable. However, an earlier Noyb complaint filed in Austria was referred to Ireland’s DPC because OpenAI named its Irish unit as ChatGPT’s service provider in Europe.
That case has stalled. Ireland’s DPC confirmed it received the complaint in September 2024 and is still investigating, with no timeline for when the probe might conclude. Separately, OpenAI is also seeking a decisive victory in its legal battle with Elon Musk over the company’s shift toward a for-profit structure.
As AI tools like ChatGPT become increasingly popular, this latest GDPR challenge could force regulators across Europe to finally confront the legal and ethical risks of AI hallucinations — and the potential damage they cause to real people.