Even though EU lawmakers worked for months to finalize the new Artificial Intelligence Act, the legislation seems to be no match for ChatGPT. The AI world’s most elusive language model keeps lawmakers on edge about how to regulate it properly, Politico reported.
The European Commission’s AI rulebook, which was supposed to bring order to the chaos this new technological frontier represents for users and businesses, seems to have become outdated before it even turned into law. The Artificial Intelligence Act, proposed in 2021, was designed to protect consumers from potential overreach, banning AI applications such as social scoring, manipulation, and even some instances of facial recognition, while designating specific uses of AI as “high-risk” and forcing their developers to abide by stricter transparency and safety rules.
But OpenAI’s ChatGPT is different. It is simply too versatile, with the potential for an unimaginable range of applications, to be neatly categorized as good or bad. The language model has virtually unlimited uses, from writing essays, articles, songs, and poems to devising intricate business strategies, sales copy, computer code, policy briefs, or even highly plausible misinformation. In a matter of seconds, mind you. Yet at the end of the day, ChatGPT is still just a machine, unable to grasp ethical boundaries on its own.
The latest draft of the AI Act was approved by the Council in December, and EU leaders were convinced they had established all the vital cybersecurity, transparency, and risk-management requirements needed to cover so-called general-purpose AIs, such as ChatGPT. But at the time, the model had only just entered its public trial phase, and it appears to be evolving by the day.
In February, the lead lawmakers on the AI Act, Dragoş Tudorache (Renew) and Brando Benifei (S&D), called for yet another redraft, this time to place AIs that generate text without human oversight in the “high-risk” category, so as to limit the model’s potential use in extensive disinformation campaigns. Others, mostly center-right MEPs, criticized the idea, however, arguing that it would classify dozens of harmless activities as risky as well and hinder everyday use of the bot.
Activists, on the other hand, worry that the regulation may not go far enough. “It’s not great to just put text-making systems on the high-risk list: you have other general-purpose AI systems that present risks and also ought to be regulated,” said Mark Brakel, director of policy at the Future of Life Institute, a nonprofit watchdog focusing on AI technology.
Part of the solution proposed by the two MEPs in charge of the regulation was drawing a clear distinction between large service providers and everyday users when it comes to adhering to transparency rules. Regulating Big Tech’s use of AI is expected to be one of the hottest topics for years to come, as social media giants have not always upheld the principles of freedom of information and freedom of expression in the past. That problem could reach catastrophic heights if biased AI systems are incorporated into our everyday platforms.
The EU institutions are expected to start negotiating the final version of the AI Act in April, but it is already clear that it won’t be an easy fight. As the Commission, the Parliament, and the Council engage in a three-way debate to solve this conundrum, Big Tech will certainly be on the sidelines trying to steer the conversation to its benefit. A recent investigation by Corporate Europe Observatory, one of the prominent transparency watchdogs, revealed that tech companies such as Google and Microsoft have been stepping up their lobbying at the EU to get lawmakers to exclude general-purpose AIs from the high-risk category. The move would certainly benefit them, but the consequences for everyone else cannot yet be fully grasped.
Artificial intelligence represents a technological breakthrough that today’s lawmakers simply cannot prepare for; they can only watch from the sidelines as history happens, trying to adapt their policies on the move. By the time the AI Act is finally put into law, it might very well be too late again. We might as well get used to it.