Last year, the EU introduced the Digital Services Act (DSA), an ambitious and thoughtful project to rein in the power of Big Tech and give European internet users more control over their digital lives. It was an exciting moment, as the world’s largest trading bloc seemed poised to end a string of ill-conceived technological regulations that were both ineffective and incompatible with fundamental human rights.
We were (cautiously) optimistic, but we didn’t kid ourselves: the same bad-idea-havers who convinced the EU to mandate over-blocking, under-performing, monopoly-preserving copyright filters would also try to turn the DSA into yet another excuse to subject Europeans’ speech to automated filtering.
We were right to worry.
The DSA is now steaming full-speed-ahead on a collision course with even more algorithmic filters – the decidedly unintelligent “AIs” that the 2019 Copyright Directive ultimately put in charge of the digital expression of 500 million people across the EU’s 27 member states.
Copyright filters are already working their way into national law across the EU as each country implements the 2019 Copyright Directive. Years of experience have shown us that automated filters are terrible at spotting copyright infringement, both underblocking (permitting infringement to slip through) and overblocking (removing content that doesn’t infringe copyright) – and filters can be easily tricked by bad actors into blocking legitimate content, including (for example) members of the public who record their encounters with police officials.
But as bad as copyright filters are, the filters the DSA could require are far, far worse.
Current proposals for the DSA, recently endorsed by an influential EU Parliament committee, would require online platforms to swiftly remove potentially illegal content. One proposal would automatically make any “active platform” potentially liable for the communications of its users. What’s an active platform? One that moderates, categorizes, promotes or otherwise processes its users’ content. Punishing services that moderate or classify illegal content is absurd – these are both responsible ways to approach illegal content.
These requirements give platforms the impossible task of identifying illegal content in realtime, at speeds no human moderator could manage – with stiff penalties for guessing wrong. Inevitably, this means more automated filtering – something the platforms often boast about in public, even as their top engineers are privately sending memos to their bosses saying that these systems don’t work at all.
Large platforms will overblock, removing content according to the fast-paced, blunt determinations of an algorithm, while appeals for the wrongfully silenced will go through a review process that, like the algorithm, will be opaque and arbitrary. That review will also be slow: speech will be removed in an instant, but only reinstated after days, or weeks, or 2.5 years.
But at least the largest platforms would be able to comply with the DSA. It’s far worse for small services, run by startups, co-operatives, nonprofits and other organizations that want to support, not exploit, their users. These businesses (“micro-enterprises” in EU jargon) will not be able to operate in Europe at all if they can’t raise the cash to pay for legal representatives and filtering tools.
Thus, the DSA sets up rules that allow a few American tech giants to control huge swaths of Europeans’ online speech, because they are the only ones with the means to do so. Within these American-run walled gardens, algorithms will monitor speech and delete it without warning, and without regard to whether the speakers are bullies engaged in harassment – or survivors of bullying describing how they were harassed.
EU institutions have a long and admirable history of attention to human rights principles. Regrettably, the EU legislators who’ve revised the DSA since its introduction have sidelined the human rights concerns raised by EU experts and embodied in EU law.
For example, the E-Commerce Directive, Europe’s foundational technology regulation, balances the need to remove unlawful content with the need to assess content to evaluate whether removal is warranted. Rather than establishing a short and unreasonable deadline for removal, the E-Commerce Directive requires web hosts to remove content “expeditiously” after they have determined that it is actually illegal (this is called the “actual knowledge” standard) and “in observance of the principle of freedom of expression.”
That means that if you run a service and learn about an illegal activity because a user notifies you about it, you must take it down within a reasonable timeframe. This isn’t great – as we’ve written, it should be up to courts, not disgruntled users or platform operators, to decide what is and isn’t illegal. But as imperfect as it is, it’s far better than the proposals underway for the DSA.
Those proposals would magnify the defects within the E-Commerce Directive, following the catastrophic examples set by Germany’s NetzDG and France’s Online Hate Speech Bill (a law so badly constructed that it was swiftly invalidated by France’s Constitutional Council), and set deadlines for removal that preclude any meaningful scrutiny. One proposal requires action within 72 hours, and another would have platforms remove content within 24 hours – or even within 30 minutes for live-streamed content.
The E-Commerce Directive also sets out a prohibition on “general monitoring obligations” – that is, it prohibits Europe’s governments from ordering online services to spy on their users all the time. Short deadlines for content removals run afoul of this prohibition and cannot help but violate freedom of expression rights.
This ban on spying is complemented by the EU’s landmark General Data Protection Regulation (GDPR) – a benchmark for global privacy regulations – which stringently regulates the circumstances under which a user can be subjected to “automated decision-making” – that is, it effectively bans putting a user’s participation in online life at the mercy of an algorithm.
Taken together, the bans on general monitoring and on harmful, non-consensual automated decision-making safeguard European internet users’ human rights to live without constant surveillance and judgment.
Many proposals for DSA revisions shatter these two bedrock principles, calling for platforms to detect and restrict content that might be illegal or that has been previously identified as illegal, or that resembles known illegal content. This cannot be accomplished without subjecting everything that every user posts to scrutiny.
The DSA can be salvaged. It can be made to respect human rights, and kept consistent with the E-Commerce Directive and the GDPR. Content removal regimes can be balanced with speech and privacy rights, with timeframes that permit careful assessment of the validity of takedown demands. The DSA can be balanced to emphasize the importance of appeals systems for content removal as co-equal with the process for removal itself, and platforms can be obliged to create and maintain robust and timely appeals systems.
The DSA can contain a prohibition on automated filtering obligations, respecting the GDPR and making a realistic assessment about the capabilities of “AI” systems based on independent experts, rather than the fanciful hype of companies promising algorithmic pie in the sky.
The DSA can recognize the importance of nurturing small platforms, not merely out of some fetish for “competition” as a cure-all for tech’s evils – but as a means by which users can exercise technological self-determination, banding together to operate or demand social online spaces that respect their norms, interests and dignity. This recognition would mean ensuring that any obligations the DSA imposes take account of the size and capabilities of each actor. This is in keeping with recommendations in the EU Commission’s DSA Impact Assessment – a recommendation that has been roundly ignored so far.
European regulation is often used as a benchmark for global rulemaking. The GDPR created momentum that culminated with privacy laws such as California’s CCPA, while NetzDG has inspired even worse regulation and proposals in Australia, the UK, and Canada.
The mistakes that EU lawmakers make in crafting the DSA will ripple out all over the world, affecting vulnerable populations who have not been given any consideration in drafting and revising the DSA (so far).
The problems presented by Big Tech are real, they’re urgent, and they’re global. The world can’t afford a calamitous EU technology regulation that sidelines human rights in a quest for easy answers and false quick fixes.