One of the European Union’s flagship child protection projects currently in preparation is the infamous Child Sexual Abuse Regulation (CSAR), or, as its critics like to call it, Chat Control: it would mandate the mass surveillance of digital correspondence across the European Union, regardless of potential violations of citizens’ fundamental right to privacy.
To understand what Chat Control could truly mean for Europe, we interviewed (perhaps) its loudest in-house critic, MEP Patrick Breyer of the European Pirate Party. A legal expert and judge by profession, Mr. Breyer has long been fighting for digital freedom and fundamental rights, both in and outside the Parliament, of which he has been a member since 2019, putting his expertise to use in both the Civil Liberties, Justice and Home Affairs (LIBE) and the Legal Affairs (JURI) committees.
First, because I don’t think too many of our readers are familiar with it, can you tell me about the European Pirate Party? What is it and what does it fight for?
Pirates strive to protect fundamental rights in the digital age. We are experts in digital technology, but also in critical thinking. We are not afraid of digital technologies, since we are very much aware of the opportunities and what you can do with them, but we are not cheerleaders either, as the Commission is. We don’t find everything that you can do with digital technology great—we understand the risks and the limits. And especially in the case of fundamental rights, digital technologies have the potential to create an oppressive surveillance state. And that’s what we’re trying to prevent. There are four Pirate Party members in the European Parliament, three from the Czech Republic and myself from Germany. And of course, we’re hoping to grow stronger in the upcoming elections.
Where is the Pirate Party situated in the European Parliament?
After lengthy discussions, we decided to join the Greens-European Free Alliance (EFA) group, because we found that it’s currently best aligned with our policies, especially when it comes to fundamental rights in the digital age. Within the group we share out the work, which means that many of the digital files, including the Chat Control file, are covered by us on behalf of the group.
Chat Control, officially known as the Child Sexual Abuse Regulation (CSAR), is currently one of the hottest topics in the EP’s civil liberties committee, although few people are familiar with it outside of Brussels. What is it about the proposal that you think should be concerning for all Europeans?
First of all, this proposal is unprecedented in the free world, insofar as it makes it mandatory for communication services (including email, messenger services, chat services, video conferencing apps, and even phone calls) to scan the content of our private communications for potentially illegal material using algorithms that are totally inaccurate. This will result in many perfectly legal private conversations, and even intimate images, being revealed to a newly created EU authority and then to police authorities. They will also be flagged to the providers themselves.
So, basically, a lot of people would be looking into our private communications based on error-prone scanning. What’s worse, this would be applied indiscriminately to non-suspects, to people who have nothing at all to do with child sexual abuse online. This system doesn’t exist anywhere in the free world: not in the United States, not in the United Kingdom, nor in any other democracy. Only China has a similar system in place, where all private communications are scanned and monitored.
What’s also being proposed is an age verification requirement for all communication services, meaning that you can no longer sign up for an email account or a chat service anonymously. To prove your age, you need to present either identification or your face. That means, for instance, that whistleblowers, or political activists who fear prosecution by the government, such as Edward Snowden, would no longer be able to trust that they can communicate anonymously. And, by the way, reporting of child sexual abuse also takes place anonymously in many cases, so this could actually make reporting more difficult.
And then another important part of the proposal is that minors would no longer be allowed to install apps that can be used for child grooming. This means that any communications app (including WhatsApp), any game that allows for communicating with other players, and a huge range of apps that allow for some communications function would be blocked for young people under 18. That is gross interference with the rights of parents, who should be able to discuss and decide (with their children) how to stay safe online. Also, parents should be able to communicate with their children. It’s really incredible what is being proposed. There is more to it, but I’ll leave it at that.
What is interesting is that despite this legislation having such profound consequences, very few Europeans know that Chat Control is being actively discussed in the European Parliament right now. But even among those who do, it’s hard to imagine a high level of public support. What do the stakeholders (NGOs, child protection agencies) and the kids themselves think about it?
The reason there is so little discussion about it is that European citizens are actually not being told the truth about the proposal and its devastating consequences. The EU Commission and the responsible Commissioner for Interior Affairs, Ylva Johansson, have repeatedly been called out by independent fact-checkers and others for spreading disinformation on this proposal. I think if citizens were aware of Chat Control, the debate would be tremendous.
As for stakeholders, this proposal has managed to divide child protection organizations, abuse victims, political groups, and others into two camps: those who believe that as much surveillance as possible will make the internet the safest place and the EU a world leader in protecting children, and those who say that it will instead make the EU a world leader in surveillance while actually harming children by making the prosecution of child sexual abuse more difficult.
As for the views of the children themselves, the answer is even more straightforward. A few months ago, we commissioned a representative poll by Episto, and European children were very clear that they do not think scanning communications is the best approach to making the internet safer for them. Nor do they think that age verification or excluding them from using certain apps is the most effective approach. Instead, they say the best and most appropriate approach would be to teach them about the risks and about strategies for defending against child grooming, for instance. They say that providers should make it easy to report incidents, and that we should make sure reports are taken seriously and followed up.
Chat Control could also criminalize the children themselves, because many of them engage in consensual, voluntary sexting with peers. Many of these investigations turn out to be targeting teenagers who post their own images or content they believe is funny. And the kids also said in the poll that they would probably get an adult to help them circumvent any age controls, so this wouldn’t be effective either.
During last week’s debate in the Parliament, the rapporteur on the file, Mr. Zarzalejos (EPP), said that even calling the legislation ‘Chat Control’ is a dangerous downplaying of the situation that could harm children. It is obvious that child protection is a priority for all European parties, but why do seemingly so few recognize the dangers of this approach?
First of all, indeed, there is no difference in opinion about the risks of the internet and the need to protect children. Everyone totally agrees with the aim of the legislation. We are only divided on how to achieve that goal.
But the end goal is not all that needs to be considered with regard to a proposal. Before voting on a solution, we need to ask whether we are actually creating more harm than good: for children, for victims, and for citizens in general, but also for businesses and governments, who rely on secure encryption as well, by the way. That’s why I’m saying we need a different approach to make this proposal really effective and also bring it in line with fundamental rights.
Only three weeks ago, a legal study on this proposal was presented to the LIBE committee, and it said very clearly that, according to the jurisprudence of the Court of Justice, untargeted detection orders cannot hold up in the courts. This means that if they go ahead with indiscriminate scanning, this legislation will be struck down by the courts and won’t do any good for children. But if we design it in a clever way, it can actually be much more effective.
What would be your recommendations to make it an effective measure?
For example, scanning only the communications of suspects, and only with a court order. No age verification that removes anonymity online; instead, design all services to be privacy friendly by default. Your profile should not be public unless you explicitly agree to it, and that should go for all users. Your location should not be automatically broadcast unless you explicitly request it. And when someone gets in touch and wants to speak to you, you should be asked whether you want to have that conversation: messages should not be shown automatically, but only once the user has consented to starting the conversation.
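To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names, not any real service’s API) of what such privacy-by-default account settings could look like: everything sensitive stays off unless the user explicitly opts in, and a stranger’s message becomes a pending request rather than landing in the inbox.

```python
# A minimal sketch of privacy-by-default settings, as described above.
# All names are hypothetical; this is illustrative, not a real API.

from dataclasses import dataclass

@dataclass
class PrivacySettings:
    public_profile: bool = False          # profile hidden unless the user opts in
    share_location: bool = False          # location shared only on explicit request
    accept_unknown_senders: bool = False  # strangers need consent to start a chat

def handle_incoming_message(settings: PrivacySettings,
                            sender_is_approved_contact: bool) -> str:
    """Deliver messages from approved contacts directly; hold everything
    else as a conversation request until the user consents."""
    if sender_is_approved_contact or settings.accept_unknown_senders:
        return "deliver"
    return "queue_as_request"

# Example: with the defaults, a message from an unknown sender is queued
# as a request instead of being shown automatically.
print(handle_incoming_message(PrivacySettings(), sender_is_approved_contact=False))
# -> queue_as_request
```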
We also need to identify public channels with a high risk of grooming and begin moderating them. Also, the law will create a new EU authority for protecting children. I could imagine this authority having operatives who sign up for these services and pretend they are kids to test the risk of grooming and report criminals to the police if they come across any grooming attempts. So why don’t we use these targeted methods instead of implementing measures that go against everyone who uses these services legitimately?
And on top of that, there is actually nothing in the proposal that would require service providers to remove child sexual exploitation material; the removal obligation is completely missing. In Germany, there was an investigation into an online platform called Boystown, in which even Europol was involved. But they only switched off the forum itself, without reporting the archives that the perpetrators had uploaded to hosting services so that they could be taken down. There were terabytes full of terrible videos and images, but nobody bothered to have them removed, meaning that even after the forum was disabled, the illegal material continued to circulate.
There is so much missing from this proposal that needs adding: secure design of services, removal obligations, and a clear mandate for the new EU authority to coordinate, to exchange best practices, to support law enforcement, and to support victims. If we add the missing parts and remove the harmful and counterproductive ones, this legislation will not only be compliant with fundamental rights, it will actually be more effective and able to truly protect children.
That’s why I don’t see a contradiction between privacy and safety. I think the two go hand in hand if we use the right approach.
If all goes by the book, Chat Control could be implemented within months. Do you see any chance of putting in all the appropriate safeguards in this very limited time we have left?
The rapporteur has a very ambitious timetable indeed; he wants this implemented before the end of the year. There is a risk that this could happen because governments are very supportive and we don’t see a lot of debate in many EU member states.
On the other hand, we’ve recently seen members of the European Parliament, who are normally not very much involved in matters of personal privacy, speak out against the proposal—both conservatives and liberals. Some MEPs are from groups where the shadow rapporteur (the negotiator on the file) is actually in favor of the proposal, and yet their colleagues choose to speak against it. So we have division within party groups, which is encouraging.
But then again, the way things are now, I don’t see a majority for preventing this kind of mass surveillance. I think that the provisions to backdoor even end-to-end encrypted messages will be removed from the proposal, but I don’t yet see a majority willing to remove the mass scanning of non-suspects or the destruction of anonymous communications by mandatory age verification.
When it comes to protecting children, many are willing to go to extreme lengths, because they don’t understand that children would actually suffer from this proposal far more than they stand to gain. Criminalization; being cut off from communication services; losing spaces for confidentially seeking help, advice, and therapy: all of these harm children, while pushing perpetrators to underground channels that can no longer be subjected to targeted investigation. It would be thoroughly counterproductive, and that is what many of the proponents struggle to understand.
Which member states do you see putting up any kind of resistance during the Council negotiations?
Well, for one, Austria is speaking out against the proposal. In Germany, the government opposes parts of it. And in Sweden, there has been a lively discussion during the Swedish presidency, with one VPN provider (Mullvad VPN) even buying billboards all over the country to inform the public about what is being proposed and to call on politicians not to turn us into a surveillance state. Conservatives, liberals, and greens have all criticized the proposal in Sweden, the home country of Commissioner Johansson herself. So there is a very lively, very critical public debate there, and I hope we will see the same in many other member states, including Spain, of course, which takes over the presidency later this year.
Is there any other impending EU legislation that the Pirate Party is focusing on right now, that you think could be dangerous to personal privacy or freedom of expression?
In my opinion, the European Commission is proposing an entire avalanche of legislation that aims to exploit our personal data for government and industry purposes while setting aside our informational self-determination, our right to decide for ourselves who knows about our private lives.
There is a discussion about the European digital identity, for instance. But what I’m currently most focused on, apart from Chat Control, is the European Health Data Space proposal. This is a proposal to collect the health records of all EU citizens and make them available to all health and treatment institutions, even to those with no need to access the information. Under the proposal, for example, your dentist would be able to consult your psychotherapy files. They also propose to make your health records available for so-called secondary use by industry, by researchers, and even by government authorities for policy making. And this includes personally identifiable health information about very intimate illnesses, treatment related to violence, psychotherapy, abortion, fertility issues, and so on. Everything would be on the record and would be shared. I have great concerns about the proposal, and I’m fighting to ensure that patients can decide which information they want to share.
Let me ask a more hypothetical question to finish. In your opinion, do Chat Control and similar legislation run the risk of being used for other, less benevolent purposes in the future? Personal communication data can very easily be abused for political or ideological censorship, or for identifying people with contrarian opinions and cracking down on freedom of expression. It seems to me that once the infrastructure is in place, it could be used not only for preventing child abuse but for virtually anything else.
There’s definitely a huge risk of abuse. We have already seen that materialize in the Pegasus scandal, where governments have been basically spying on political opponents. And of course, with Chat Control, they would only need to insert a different hash code to dig up other material. The Chinese government is doing that right now. They are scanning for certain keywords that would indicate making fun of their president or discussing pro-democracy protests, for instance.
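To see why swapping the target is so trivial, consider a minimal sketch of hash-based content matching (hypothetical code; real deployments use perceptual hashes such as PhotoDNA that tolerate small image changes, while a plain cryptographic hash stands in here for simplicity). The matching logic never knows what it is looking for; whoever controls the hash database decides what gets flagged.

```python
# Minimal, purely illustrative sketch of hash-based content matching.
# Real systems use perceptual hashes (e.g., PhotoDNA); a plain SHA-256
# stands in here to keep the example self-contained.

import hashlib

# The scanner's behavior is defined entirely by this set of hashes.
# Whoever supplies the database decides what counts as "illegal".
flagged_hashes = {
    "placeholder-hash-of-known-abuse-image",  # hypothetical entry
}

def scan_attachment(data: bytes) -> bool:
    """Return True if the attachment matches the flagged database."""
    return hashlib.sha256(data).hexdigest() in flagged_hashes

# The danger described above: re-point the same infrastructure at
# different content and it flags, say, protest flyers instead.
# flagged_hashes = load_hashes("banned_political_material.db")  # hypothetical
```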
So, there’s a huge risk that this will be used for other purposes, step by step, but also that it could be extended. Because if you accept the principle that all private spaces must be scanned, then the next step will be applying this to the entirety of our smartphones: not only to communications, but also to the photos we store on our own devices and other types of data. The operating systems on our computers could be obliged to scan them as well. The post offices could be obliged to open and scan all letters, because that is the offline equivalent of Chat Control: the post office opening and scanning our mail. We could even imagine mandatory AI camera systems installed in private apartments, reporting to the police whenever they detect certain sounds (like a gunshot) or certain keywords in conversations.
So, this process could easily create an oppressive system such as we have never known before, all along the same argument: crime is happening and we need to eradicate it. But I’m convinced that our society would be much less safe with these systems in place. We would no longer be safe from providers, from the police, or from our own governments. I think the existence of private spaces is something every person needs for their mental health. It is needed for political activism, for journalists and whistleblowers, but also for children and abuse victims seeking help, advice, and support, and even for reporting to the police.
Private spaces are an essential safety factor and contribute far more to public safety than putting everything under the control of the government or some algorithm would. Even today, you are not safe in regimes of absolute control. Are you safe in China? Would you have lived safely under the Nazi regime? Of course not. That’s why we don’t want to get there. And hopefully, if enough people become aware of it, we won’t have to.