AI hallucinations have crossed a disturbing new threshold as OpenAI faces legal action over ChatGPT's fabrication of horrific crimes. What happens when an AI chatbot doesn't just make up harmless trivia, but falsely accuses you of murdering your own children? One Norwegian man has found himself at the center of this nightmarish scenario, raising serious questions about AI accountability and data protection.
The Disturbing Incident
Arve Hjalmar Holmen, a Norwegian citizen, was shocked when he decided to query ChatGPT about himself. The AI confidently responded with a fabricated story claiming Holmen had murdered two of his sons and attempted to kill his third child. The chatbot even specified that Holmen had been sentenced to 21 years in prison for these fictional crimes. What made the hallucination particularly unsettling was that ChatGPT correctly identified several personal details about Holmen's life, including the number and gender of his children, their approximate ages, and his hometown. This accurate personal information appeared alongside completely fabricated criminal allegations.
Legal Response and GDPR Implications
Following the incident, Holmen contacted the privacy rights advocacy group Noyb, which has now filed a formal complaint with Datatilsynet, the Norwegian Data Protection Authority. The complaint alleges that OpenAI violated the General Data Protection Regulation (GDPR), specifically Article 5(1)(d), which requires companies processing personal data to ensure its accuracy. When data is inaccurate, the regulation requires that it be corrected or deleted.
The Persistence Problem
While ChatGPT's underlying model has since been updated and no longer repeats these defamatory claims about Holmen, Noyb argues that this does not resolve the fundamental issue. According to the complaint, the false information may still be present in the dataset underlying the large language model. Because ChatGPT feeds user interactions back into its training process, there is no guarantee the fabricated claims have been fully purged from the model unless it is completely retrained. This uncertainty continues to cause distress for Holmen, who has never been accused or convicted of any crime.
Broader Compliance Issues
Noyb's complaint also highlights a broader problem with ChatGPT's compliance with Article 15 of the GDPR, which grants individuals the right to access their personal data. The nature of large language models makes it virtually impossible for users to see or retrieve all of the data about themselves that may have been incorporated into the training dataset. This fundamental limitation raises serious questions about whether AI systems like ChatGPT can ever fully comply with existing data protection regulations.
OpenAI's Limited Response
Currently, OpenAI's approach to addressing these kinds of hallucinations appears minimal. The company displays a small disclaimer at the bottom of each ChatGPT session stating, "ChatGPT can make mistakes. Consider checking important information." Critics argue this is woefully inadequate given the potential harm that can result from AI-generated falsehoods, particularly when they involve serious criminal allegations against identifiable individuals.
Image: OpenAI acknowledges potential errors in ChatGPT responses despite the serious consequences they may have.
The Path Forward
Noyb is requesting that the Norwegian Data Protection Authority order OpenAI to delete the inaccurate data about Holmen and ensure ChatGPT cannot generate similar defamatory content about others in the future. However, given the black-box nature of large language models, implementing such safeguards presents significant technical challenges. As AI systems become increasingly integrated into daily life, this case highlights the urgent need for more robust regulatory frameworks and technical solutions to prevent AI hallucinations from causing real-world harm to individuals.