The writer, a lawyer at Bredin Prat in Paris, was previously counsel for Twitter UK
Social media moderation and its impact on democracy is again in the spotlight following US president Donald Trump’s controversial posts on Twitter and Facebook regarding the George Floyd protests. Twitter hid a particularly offensive post behind a warning label and Facebook proactively announced that it would do nothing.
The timing couldn’t be better for those in the EU who want to impose more rules and tougher sanctions on online operators deemed too slow at removing illegal content from their platforms. This is especially so in France, where a highly contentious new law will take effect if it survives a last-ditch constitutional challenge.
The Loi Avia, named after the politician responsible for drafting it, Laetitia Avia, expands the scope of the existing legal regime for online content removal and imposes draconian timeframes for taking down posts. It gives companies an incentive to remove distressing but not necessarily illegal content zealously, so as to avoid criminal prosecution and hefty administrative fines.
No fewer than 14 categories of content-related offences are covered, ranging from the most disturbing (disseminating child pornography and glorifying terrorism) to offences such as disseminating pornography that could “be perceived” by a minor.
Companies that receive a takedown notice concerning allegedly illegal content will have either one or 24 hours to remove the material. Which deadline applies depends on who has made the request, the nature of the content and whether the company is a “host”, “platform” or “website publisher”. The line between host and platform is very fine. Social media companies will be subject to the rules for both.
The law explicitly targets foreign companies: all platforms with more than a specified number of users located in France must comply. The threshold, yet to be determined, will undoubtedly be set to ensure that all the big players are caught. Search engines are also within the law’s ambit.
The most serious flaw is the law’s likely impact on free expression. French senators who have appealed to the country’s Constitutional Council argue that requiring platforms to assess whether vastly different types of content are “manifestly illegal”, in a short timeframe and without the possibility of extension, risks “over-censorship”. This is not far-fetched. Just last year, the French authority empowered by the law to demand one-hour takedowns ordered Google+ to remove a post depicting the French president and his prime minister as dictators. Google refused. But with less time to evaluate context, and faced with the threat of criminal prosecution, the outcome might be different.
Over-censorship is a particular risk if operators rely more on machine learning to remove borderline content before it becomes a liability. Companies that fail to remove content quickly enough face fines of up to €1.25m; their directors and employees can be fined and even imprisoned. Companies that breach the law’s new internal compliance rules also face GDPR-like administrative fines of up to €20m, or 4 per cent of total annual global revenues.
The country’s Higher Audiovisual Council also has new powers to examine “the principles and methods of conception of the algorithms and the data used by these algorithms.” Exercise of this power will certainly be challenged.
Social media companies have improved their responsiveness and transparency and are developing other approaches to dealing with offensive content, such as labelling it instead of deleting it. Further regulation seems unavoidable. The European Commission is in the process of preparing its Digital Services Act, which will tackle the same issues, and more, for the entire EU. The commission, which questioned the compatibility of the French law with the EU ecommerce directive and concluded that it jeopardises the right to freedom of expression, suggested that France and other member states should hold off on adopting new laws in this area. Nevertheless, the French government pushed ahead.
Recent events in the US demonstrate why a one-remedy-fits-all approach to content moderation is bad for society: it can lead to the suppression of information crucial to assessing the health of our institutions. The French Constitutional Council should strike down the Loi Avia and the EU should take the opportunity to propose a nuanced legislative solution that addresses the complexities at play and protects all users’ rights.