Following Italy’s recent temporary ban on ChatGPT, the AI chatbot launched by U.S. company OpenAI, over possible violations of the EU’s General Data Protection Regulation (GDPR), the EU looks set to move toward more restrictive regulation of the software. The French data protection authority, CNIL, for example, has received complaints that ChatGPT violates data privacy laws.
Because OpenAI has no physical establishment in any member state, no single lead supervisory authority has jurisdiction under the GDPR, so individual EU countries can launch their own investigations, potentially leading to national-level regulation, including bans. All the same, several national authorities have signaled a desire for EU-wide coordination. Graham Doyle, a spokesman for the Irish Data Protection Commission, has stated that the commission will coordinate with its counterparts across Europe, and the Belgian data protection authority said that ChatGPT’s potential infringements “should be discussed at a European level.”
The crux of regulators’ suspicion appears to be the opacity of ChatGPT’s training data: as Politico points out, “OpenAI has never revealed what dataset it used to train the AI model underpinning the chatbot,” and even Microsoft, the company’s principal investor, has admitted that it does “not have access to the full details of its vast training data.”
It now remains to be seen what the national-level investigations uncover, and whether and how their findings will shape EU regulation of ChatGPT and of AI more generally.