Adopted last year as benchmark EU legislation, the Digital Services Act (DSA) already forces social media platforms and search engines to remove illegal and harmful content in Europe, especially what it classifies as mis- and disinformation related to democratic and electoral processes. Now, with the EU elections—and a dozen national ones—rapidly approaching, Brussels is busy ensuring that every company in Silicon Valley knows what its duty is.
Attempts at regulating online content as a means of controlling the political agenda should concern Europeans, who would do well to remember the old adage: “never give the government more power than you would want your worst enemy to have.”
On Tuesday, March 26th, the European Commission released a lengthy document entitled “Guidelines for providers of VLOPs and VLOSEs on the mitigation of systemic risks for electoral processes,” meant to be implemented as prescribed or replaced by equally effective measures.
The alphabet soup in the title refers to “very large online platforms” (social media giants such as Facebook, Instagram, and TikTok) and “very large online search engines” (such as Google), defined as providers with an average of at least 45 million monthly active users in Europe.
The guidelines instruct social media platforms to intensify their usual moderation efforts during election periods—these are “recommended” to begin six months before an election and continue until at least a month after—by setting up specialized monitoring and risk assessment teams that include experts with country-specific knowledge, such as local, independent fact-checkers and civil society organizations.
The European Conservative recently analyzed at length the true “independence” of these fact-checkers by exploring some of Meta’s main local partners in EU countries. These organizations, the vast majority of which demonstrate substantial leftist bias, are not only paid millions to flag and remove content that they deem harmful, but have now been given a set of highly potent AI search tools to make their job a lot more effective.
The EU’s new guidelines demand that practically all other platforms follow Meta’s example by setting up their own expert teams and employing “professional” fact-checkers. During the press conference on Tuesday, a Commission official said that country-specific risk assessment is obligatory under the DSA, and that platforms should check whether they have sufficient moderators on the ground with adequate local knowledge.
The Eurocrats noted that the Commission is already investigating Elon Musk’s X for possible non-compliance, as the platform might not employ enough content moderators to sufficiently address the risks.
The document also recommends applying election-specific measures to counter disinformation, customized for each election and country. These might include adjusting the algorithms to promote “official” electoral information and to reduce the visibility of or hide alternative or so-called harmful sources. Political ads, as well as all AI-generated content, must always be clearly labeled as such. Preventing the dissemination of harmful content should also be done with the cooperation of local law enforcement agencies.
The document makes it clear that all these measures should apply to “legal but harmful” content that “can influence voters’ behavior,” while leaving the rest to the imagination of the platforms. Europeans should brace themselves in the knowledge that the measures will be used to justify widespread politically motivated abuse in the name of democracy.
It is all eerily similar to a new German law now awaiting enactment, which aims to employ civil society organizations to police the internet for “non-punishable”—otherwise known as legal—hate speech. In other words, Berlin wants to disburse hundreds of millions in public funding to organizations mandated to flag and remove pro-AfD content for allegedly promoting “far-right extremism”—backed by the practically identical actions of the European Union.