The Austrian data privacy watchdog NOYB (None of Your Business) has announced it is filing a complaint against the popular AI tool ChatGPT for providing incorrect personal data without any mechanism to correct it.
“ChatGPT keeps hallucinating—and not even OpenAI can stop it,” NOYB said in a statement.
In tech speak, “hallucinating” describes an AI program confidently producing false or fabricated information.
NOYB explained in a statement:
The problem is that, according to OpenAI itself, the application only generates “responses to user requests by predicting the next most likely words that might appear in response to each prompt.” In other words: While the company has extensive training data, there is currently no way to guarantee that ChatGPT is actually showing users factually correct information. On the contrary, generative AI tools are known to regularly “hallucinate,” meaning they simply make up answers.
The group cites a New York Times article asserting that chatbots—computer programs designed to simulate conversation with humans—provide false information anywhere from 3% to as much as 27% of the time.
When a chatbot “hallucinates” personal data, this can violate the EU’s data privacy laws. Under the EU’s General Data Protection Regulation (GDPR), personal data must be accurate, and companies processing personal data must be able to show what data they hold on a person and where it came from. If the data proves inaccurate, it must be corrected.
Maartje de Graaf, a data protection lawyer at NOYB, said:
Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences. It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.
The complainant in the NOYB filing asked ChatGPT what his birthday was, and the chatbot responded with the wrong date. According to the complaint, OpenAI, the company behind the chatbot, did not adequately address his request to correct the date or to disclose where the data came from. NOYB argues that, at a minimum, the chatbot should have replied that it did not have enough data to answer the question accurately.
The Verge reported last year that OpenAI had updated its policies to facilitate requests to change or delete personal data, but analysts are uncertain whether individual pieces of data can be extracted and corrected within large, complex language models like ChatGPT.
This is far from the first legal challenge OpenAI has faced regarding personal data. The Verge reports that a mayor in Australia has threatened to sue the AI company for defamation because its chatbot falsely claimed he had served time in prison for bribery.
Last year, Italy’s data protection agency temporarily banned the service in the country, citing four concerns, one of which was the provision of inaccurate personal information. It also found that OpenAI failed to notify users of its data collection practices, had no legal basis under EU law for processing personal data, and did not adequately prevent children under 13 years old from using the service. The agency ordered OpenAI to immediately stop using personal information collected from Italians to train its chatbot.
According to The Verge, ChatGPT was allowed back online in Italy after OpenAI made some superficial changes. Spain, France, and Germany are also investigating OpenAI’s use of personal data.