OpenAI Faces Privacy Complaint in Norway Over ChatGPT's Defamatory 'Hallucinations'

OpenAI, the organization behind the AI chatbot ChatGPT, is confronting a privacy complaint in Norway due to the chatbot generating false and defamatory information about an individual. This incident highlights ongoing concerns regarding the accuracy and reliability of AI-generated content.
A Norwegian individual discovered that ChatGPT produced fabricated information alleging he had been convicted of murdering two of his children and attempting to kill a third. These unfounded claims have caused significant distress and potential reputational harm to the individual involved.
The privacy advocacy group NOYB ("None of Your Business") is supporting the affected individual by filing a complaint with Norway's data protection authority, Datatilsynet. NOYB argues that OpenAI's ChatGPT violates the General Data Protection Regulation (GDPR) by producing and disseminating inaccurate personal data. Joakim Söderberg, a data protection lawyer at NOYB, stated, "The GDPR is clear. Personal data has to be accurate. If it's not, users have the right to have it changed to reflect the truth."
Under the GDPR, organizations are obligated to ensure the accuracy of personal data they process, and individuals have the right to have inaccurate data concerning them rectified. In this case, ChatGPT's generation of false information about the complainant could be seen as a breach of these provisions. Confirmed violations of the GDPR can result in penalties of up to €20 million or 4% of a company's global annual turnover, whichever is higher.
This is not the first time ChatGPT's inaccuracies, often referred to as "hallucinations," have led to legal challenges:
- In 2023, an Australian mayor considered legal action after ChatGPT falsely claimed he had been imprisoned for bribery.
- In 2024, Italy's data protection authority fined OpenAI €15 million for processing personal data without a proper legal basis.
- In the United States, a defamation lawsuit was filed against OpenAI after ChatGPT fabricated legal accusations against a radio host.
These incidents underscore the broader issue of AI-generated misinformation and its potential legal ramifications.
OpenAI has acknowledged that ChatGPT can produce inaccurate information and displays disclaimers advising users to verify the chatbot's outputs. Critics argue, however, that such disclaimers are insufficient to mitigate the harm caused by false information. If the Norwegian data protection authority finds OpenAI in violation of the GDPR, the company could face substantial fines and be required to implement measures to prevent future inaccuracies.
The complaint filed in Norway adds to the growing scrutiny of AI systems like ChatGPT and their adherence to data protection laws. As AI technology continues to evolve, ensuring the accuracy and reliability of AI-generated content remains a critical challenge for developers and regulators alike.
Source: TechCrunch
Thank you for being a Ghacks reader. The post OpenAI Faces Privacy Complaint in Norway Over ChatGPT's Defamatory 'Hallucinations' appeared first on gHacks Technology News.