Future Tense

France’s New Online Hate Speech Law Is Fundamentally Flawed

What could a better approach look like?

The National Assembly in Paris, on May 12. Gonzalo Fuentes/Getty Images

The solution to online hate speech seems so simple: Delete harmful content, rinse, repeat. But David Kaye, a law professor at the University of California, Irvine, and the U.N. special rapporteur on freedom of expression, says that while laws to regulate hate speech might seem promising, they often aren’t that effective—and, perhaps worse, they can set dangerous precedents. This is why France’s new social media law, which follows in Germany’s footsteps, is controversial across the political spectrum there and abroad.

On May 13, France passed “Lutte contre la haine sur internet” (“Fighting hate on the internet”), a law that requires social media platforms to rapidly take down hateful content. Comments that are discriminatory—based on race, gender, disability, sexual orientation, and religion—or sexually abusive have to be removed within 24 hours of being flagged by users. Content related to terrorism and child pornography must be removed within one hour of flagging. Social media companies could face fines of up to 1.25 million euros (about $1.37 million) if they fail to remove the content on time.

The law has been contentious, especially among legal experts and activists, since it was introduced back in 2018. France’s National Consultative Commission on Human Rights, for instance, declined to approve the bill, and civil liberties groups such as La Quadrature du Net have criticized it for being unrealistic and potentially harmful. “One of the dangers of this law is that it could turn against journalists, activists, and researchers whom it claims to defend,” LQDN told CNN. The group also warned that short removal times and large fines could “lead to targeted campaigns against underrepresented voices.”

Last August, Kaye penned a statement on the draft legislation that laid out many of these concerns. Like LQDN, Kaye worried that the legislation was overbroad and vague. He noted that a lack of clear definitions for “extremism,” “discrimination,” and “inciting hatred” could lead to arbitrary and abusive interpretations of the law. He also raised concerns that social media companies would be overzealous and inconsistent in their removal of content for fear of fines, potentially “penaliz[ing] minorities while strengthening the position of dominant or powerful groups.” Another concern of Kaye’s was the legislation’s privatization of judicial functions—effectively, it delegates censorship duties to the private sector, leaving social media companies to determine what is and isn’t illegal.

The French government replied to Kaye’s statement (which he told me is positive in itself, since governments don’t always respond), affirming the country’s commitment to freedom of expression and arguing that the bill does indeed appear to comply with human rights law. But many of Kaye’s concerns remain. In addition to those articulated in his letter, he fears that the law will result in considerable self-censorship, both in terms of what platforms allow on their sites and what users are willing to discuss. He’s also worried about the increased use of artificial intelligence in content moderation. Social media companies, including Facebook and Twitter, already use A.I. to moderate speech, but the increased demands on these companies will further entrench the technology in their anti-hate speech and misinformation campaigns.
And while A.I. can be effective in detecting imagery of, for example, terrorist attacks, dead bodies, and child pornography, we still haven’t developed the contextual analysis for A.I. to be able to decipher the nuances of hate speech, Kaye said.

There’s also the issue of emboldening tech giants even further. Although the new law may appear to put restraints on major social media platforms, it’s actually granting censorship powers to the most highly capitalized companies, since they’re the only ones that can afford to comply. “The paradoxical effect is to give [these] companies more power in terms of deciding what’s legitimate and also giving them more power over the market to the potential hindrance of innovators,” Kaye said.

While the merits of France’s legislation were debated over the past couple of years, we’ve had the chance to watch another controversial hate speech law play out elsewhere in Europe. Germany’s Network Enforcement Act, or NetzDG, which was approved in 2017 and went into effect the following year, is similar to France’s—with a 24-hour deadline, fines in the millions of euros, and vaguely defined transgressions. In 2018, Wenzel Michalski, the Germany director at Human Rights Watch, called NetzDG “fundamentally flawed,” voicing many of the same concerns as Kaye.

After NetzDG was enacted, Yascha Mounk detailed some of the law’s problems and consequences in the New Republic. Extremists have found ways to sidestep the law, according to Mounk, using coded messages to convey hate. Mounk also argued that these laws “actually strengthen the resolve of those on the right” by helping them weave narratives of oppression. He noted that this played a role in helping the country’s far-right populist party Alternative for Germany—which claims it’s being censored and bills itself as the only party with the “courage to tell the truth”—get to the Bundestag, with 94 seats, in the fall of 2017. (Another cause for concern, Mounk wrote, is that populist authoritarians may be elected in democratic countries and misdirect these laws “to deeply oppressive uses.”)

The European Commission asked France to wait on the legislation until the Digital Services Act, an overhaul of how the EU regulates tech companies, is introduced across Europe. The act, which is still being drafted, will replace the 20-year-old e-Commerce Directive, which currently governs online services in the EU, and it will define platforms’ liability for illegal or harmful content. By disregarding the European Commission and charging ahead, France may be setting a precedent for hasty speech regulation amid the coronavirus pandemic in Europe and farther afield. In fact, the hate speech legislation is the first law unrelated to COVID-19 that the country’s lower house of Parliament has taken up since March, CNN reported, and Simon Chandler wrote in Forbes that it appears the French government has exploited concern over COVID-19 misinformation to finally get it through. Hate speech intersects with misinformation, and COVID-19 has spotlighted the harmful content swirling around on social media, spurring tech companies to implement some of their most aggressive policies on content removal to date. As fear over social media content has peaked amid the pandemic—and platforms are hiring more content moderators and fact-checkers—it’s entirely possible that other countries will see this moment as an opportunity to introduce similar legislation.

One of the most troubling aspects of the law and its evident drawbacks is that it’s so clearly well-intentioned. As the French government wrote in response to Kaye’s statement, the legislation “aims to protect citizens against [hateful] comments, which far from conveying ideas or opinions, seek to undermine human dignity and the integrity of individuals.” If regulating hate speech through such laws is unlikely to achieve this, is it even possible to address the problems of speech raised by social media platforms within the framework of international human rights law?

Kaye, at least, believes it is—he even pointed out that France took a promising approach in 2019 when it established a presidential commission that prioritized company transparency and multistakeholder oversight in addressing hate speech. The government eventually shelved the commission, which Kaye finds baffling: “They had a blueprint right in front of them,” he said.

A more successful strategy, Kaye believes, requires thinking beyond the question of whether speech is lawful or unlawful and what governments can restrict. It would focus instead on creating transparency and oversight standards. That might look like, for example, requiring social media companies to disclose their methods for regulating certain kinds of content, which would give governments the chance to observe and evaluate how the companies are operating before restricting speech. Or it could include having companies develop a content regulation policy that shows they’re respecting human rights before being allowed to operate in a given jurisdiction. It might also take the form of the government being regulated as well, with any content demands from the government going through a court or an independent administrative agency.

The possibilities are there, Kaye said, but governments just aren’t looking for them—or, in France’s case, they’ve looked them in the eye and then ignored them. “There are so many interesting, innovative, but human rights–sensitive approaches that governments are just not taking on board,” he said. “It’s kind of a failure of imagination.”

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
