Covadonga Torres Assiego holds a Ph.D. in law and is an analyst specializing in artificial intelligence, fundamental rights, and technological governance in the European context. From a critically minded academic perspective, she has closely followed the process by which the European Union became the first major regulatory power to comprehensively regulate artificial intelligence through the so-called AI Act.
European AI regulation was born with a clear ethical and rights-based vocation but has also sparked an intense debate over its collateral effects: legal uncertainty, a brake on technological entrepreneurship, a geopolitical disadvantage vis-à-vis powers such as the United States or China, and internal contradictions between the discourse of citizen protection and the advance of digital control mechanisms.
In this interview, Torres Assiego offers an unsparing analysis of the limits and contradictions of the European regulatory framework, the risk of an 'algocracy' (also known as 'government by algorithm') lacking real democratic oversight, the inconsistency between the AI Act and proposals such as Chat Control, and Europe's difficult positioning in a technological race that is already being fought in terms of global power.
The European Union was the first major region to approve comprehensive regulation of artificial intelligence, despite lacking its own major technological powers. How do you assess this step?
It has been an enormously ambitious project, perhaps too ambitious. In terms of intentions, there is indeed a genuine concern for fundamental rights and ethical limits, something already anticipated in the Commission's White Paper. The problem is that this regulatory ambition has had very negative consequences for entrepreneurship in Europe. The risk-based classification of systems (ranging from unacceptable to minimal risk) is confusing and imprecise, and it generates fear and legal uncertainty, especially among small companies that want to innovate.
Before regulating, it would be useful to clarify the concept. What exactly are we regulating when we talk about artificial intelligence?
Artificial intelligence is a technology that allows machines to perform tasks that traditionally required human reasoning. It has no consciousness, but it learns from data, recognizes patterns, understands language, and makes decisions. In the case of generative AI, the key element is algorithmic training: it learns from interaction with the user. A common example is how a platform learns your preferences based on your interactions and adjusts the content it shows you. That is already artificial intelligence in operation.
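The preference-learning loop described above can be sketched in a few lines. This is a deliberately toy illustration, not any real platform's algorithm: all class names, topics, and numbers are hypothetical, and real recommender systems use far richer signals and models.

```python
from collections import Counter

class PreferenceModel:
    """Toy sketch: a platform learning a user's topic preferences from clicks."""

    def __init__(self) -> None:
        self.clicks: Counter = Counter()  # topic -> number of interactions

    def record_click(self, topic: str) -> None:
        # Each interaction is treated as a training signal.
        self.clicks[topic] += 1

    def score(self, topic: str) -> float:
        # Share of all interactions that involved this topic.
        total = sum(self.clicks.values())
        return self.clicks[topic] / total if total else 0.0

    def rank(self, items: list[tuple[str, str]]) -> list[str]:
        # Reorder candidate (title, topic) pairs by learned preference.
        ordered = sorted(items, key=lambda item: self.score(item[1]), reverse=True)
        return [title for title, _topic in ordered]

# A user who clicks mostly on sports content...
model = PreferenceModel()
for topic in ["sports", "sports", "politics"]:
    model.record_click(topic)

# ...is subsequently shown sports content first.
feed = model.rank([
    ("Election update", "politics"),
    ("Match recap", "sports"),
    ("Recipe of the day", "cooking"),
])
print(feed)  # sports item ranked first
```

The point of the sketch is the feedback loop the interviewee describes: every interaction updates the model, and the updated model reshapes what the user sees next.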
There is an almost futuristic perception of assistants that know us better than we know ourselves. Is that something that can be regulated?
Partly yes, but the law always lags behind. AI is disruptive and advances in ever-larger leaps. Regulating something that is not fully understood is very difficult. Moreover, artificial superintelligence—when it clearly surpasses human intelligence—raises questions for which there is no realistic legal response. What can and must be done is to prevent clear abuses, such as the generation of illegal or deeply harmful content.
From an academic standpoint, how is Europe’s position in this race perceived?
There is no unanimous view. Ethically, it constitutes an advance. In terms of fundamental rights, as well. But geopolitically, we are far behind the United States, China, or even Russia. In addition, there is a serious inconsistency: mass surveillance is prohibited in the AI Act, while Chat Control is promoted at the same time. That is not coherent. In a tripolar world governed by the law of the strongest, Europe is imposing limits on itself that its rivals do not share.
One recurring argument is that regulation protects the citizen. Is that really the case?
On paper, there are tools, but the regulation has not yet been fully implemented, and European regulators do not even agree among themselves. Citizens, moreover, are not aware of the real risks. We see constant algorithmic polarization and manipulation of consumption and behavior. Algorithms are not neutral; they respond to economic and political interests. The key question is to what extent our decisions remain truly free.
Who controls the controller in an increasingly algorithmic society?
That is the core problem. We are entering an ‘algocracy.’ There is a dangerous fallacy of algorithmic authority: it is assumed that whatever an AI says is true. It is not. Models make mistakes constantly and reproduce biases. That is why I insist on the importance of the analog world, of critical thinking, and of reducing dependence on social media, which are ultimately systems for manipulating behavior.
What role does Chat Control play in this context?
It is particularly worrying. Even if it is presented as voluntary, in practice, citizens will have no alternative if major platforms adopt it. Moreover, it introduces a form of surveillance that is incompatible with the principles the regulation itself claims to defend. Citizens are left trapped between discourses of protection and practices of control.
In journalism and academia, there is frequent talk of demanding political accountability. Is that possible?
It is difficult, but essential. We must always turn to primary sources, to legislation, to serious academic studies, and to media with different orientations. The algorithm feeds our confirmation bias. It is also healthy to disconnect partially from the digital environment. We are turning the internet into a prison for will and privacy.
In geopolitical terms, how do you see the EU compared with the United States and China?
Europe’s greatest strength is its theoretical defense of fundamental rights. But its weaknesses are far greater: lack of regulatory clarity, internal contradictions, and a huge strategic disadvantage. The United States, China, Russia, and even the Gulf states are investing massively in AI applied to defense and security. China, moreover, trains its citizens in AI from a very early age. Europe is falling behind on too many fronts.
Do you see real potential in Europe in the medium term?
There is potential. We have cases such as ASML in the Netherlands, with a global technological monopoly. But these are exceptions. We need to genuinely support small and medium-sized entrepreneurship, reduce the fear of sanctions, and review the regulations. Right now, large companies can absorb fines; small entrepreneurs cannot. And it is precisely those small entrepreneurs who sustain Europe’s economic fabric.
To conclude, what do you expect from the immediate future?
I hope Europe wakes up, revises its approach, and understands that protecting rights is not incompatible with competing. If regulatory excesses are not corrected, we will continue to lose ground. Today, the regulation has more defects than virtues. And time, in this race, is not on our side.