Last week, Australia’s Parliament reacted to the Christchurch massacre by rushing an amendment to its criminal code on sharing abhorrent violent material that passed its Senate and House of Representatives in approximately 48 hours. While there is undoubtedly a grave and urgent need to prevent terrorists and violent extremists of all stripes from exploiting internet platforms to spread vile and inflammatory content, hastily drafted laws passed under pressure tend to create new problems while doing little to counter such threats. The history of the internet is riddled with problematic laws haphazardly passed in the wake of horrific violence.
The USA Patriot Act, passed within weeks of Sept. 11, has become shorthand for broad expansion of security powers in the immediate aftermath of a terrorist attack. Passed with only one senator opposing it, the sheer breadth of its provisions defies easy summary. Its business records provision, which at the time it was enacted drew concerns about the government accessing library records, became infamous in 2013 when Edward Snowden revealed it provided the legal basis for the secret collection of the phone records of anyone in the United States. Although this program was defended as critical to combating terrorist attacks, the Privacy and Civil Liberties Oversight Board would conclude that Section 215 was of “minimal value in safeguarding the nation from terrorism.” As one of the law’s authors, Rep. Jim Sensenbrenner, a Republican from Wisconsin, said: “I can say that if Congress knew what the NSA had in mind in the future immediately after 9/11, the Patriot Act never would have passed, and I never would have supported it.” Sensenbrenner went on to become a strong proponent of reform and original co-sponsor of the USA Freedom Act of 2015, a rare instance of the ratcheting back of surveillance powers.
In November 2008, four days of coordinated shootings and bombings by Lashkar-e-Taiba shook Mumbai and prompted a call for greater government powers. At the time, India’s Parliament had already been considering significant changes to its Information Technology Act, with provisions on blocking websites and state surveillance initially proposed years before. But terrorism provided renewed urgency and led to the passage of the IT Act amendments within a month of the attacks, without parliamentary debate, together with a package of other counterterrorism laws. Recent landmark Supreme Court judgments in India have curtailed portions of the IT Act for infringing on freedom of expression and established an unambiguous right to privacy. But challenges to the government’s opaque censorship and surveillance practices are pending even as new rules that would ramp up pressure on companies are under consideration.
Following a similar pattern, the spate of terrorist attacks in France during 2015 and 2016 elicited a state of emergency and expansive new counterterrorism powers affecting both privacy and freedom of expression. These included broad powers to search computers as well as the ability to block websites that allegedly glorified terrorism, all without prior judicial authorization. A 2016 law criminalized regularly visiting websites that incite or glorify terrorism, but the Constitutional Council struck it down in February 2017. The government reintroduced an amended version of the law later that year, only for it to be struck down once again in December 2017. Of the hundreds of convictions for “apology for terrorism” in recent years in France, relatively few have concerned direct incitement to violence, raising the question of whether criminalizing speech is an effective means of countering terrorist narratives or preventing radicalization.
Although France finally ended the state of emergency two years after it was introduced, many of these provisions were then established in ordinary law via new counterterrorism legislation. Following a May 2018 official visit, the U.N. special rapporteur on counterterrorism and human rights, professor Fionnuala Ní Aoláin, expressed “concern at the transposition of exceptional emergency-form powers into the ordinary law and the effect this may have on the protection of rights” and warned of overly vague definitions for terms such as terrorism. “Precision is essential in the use of exceptional counter-terrorism powers, and ambiguity must be remedied to ensure adherence to international human rights obligations,” wrote the rapporteur.
At the time of the Christchurch massacre, Australia had already earned the ire of both technology companies and privacy advocates for sweeping restrictions on encryption passed last year. We all share the shock and outrage at both the attack itself and the way it was amplified across social media and the internet, but both the content of the new legislation and the speed with which it was developed are disquieting, to say the least. The new law requires internet service providers and content and hosting companies to report to Australian authorities “abhorrent violent material” relevant to Australia, from anywhere in the world. Content and hosting companies that fail to “expeditiously” remove such content risk huge fines and up to three years’ imprisonment.
This kind of strict liability goes against established human rights principles for intermediaries and will likely lead to vast over-censorship of all varieties of legitimate content, with negative consequences not just for privacy and free expression, but also for security. It is the kind of approach associated with the Cyberspace Administration of China or Thailand’s lèse-majesté laws, rather than with governments known to respect and uphold international human rights laws and norms. U.N. rapporteurs drafted a letter to the Australian government expressing concerns with this approach, only to find the law had passed before they could send their communication.
There are understandable reasons for the rush to legislate the internet after terrorist attacks. Law enforcement and security services, under serious public pressure to “do something,” likely find proposing internet legislation a convenient path. Some may have proposals for new powers at the ready, which can be adopted with only a fraction of the scrutiny they would otherwise receive. Technology companies are a tempting target for legislation, for reasons of varying fairness. And terrorist attacks in the digital age frequently present unprecedented challenges that may demand new approaches from governments and companies alike.
While speed may seem to be of the essence, caution is essential. It is far too easy to draw up legislation with vague terms such as “abhorrent violent material,” which seem to have self-evident definitions in the wake of a livestreamed massacre but which will undoubtedly prove problematic down the road. Meaningful consultative processes are needed to ensure that a wide range of experts across governments, companies, civil society, and academia can weigh in on such provisions. Rather than assuming that the removal of content online will automatically diminish support for terrorism offline, lawmakers should demand evidence to inform their efforts. As Daphne Keller of the Stanford Center for Internet and Society has written, “For all the untold pages and grant dollars dedicated to the topic of online extremism, we still know remarkably little about when extremist speech leads to violence and how to prevent that from happening.”
Resisting the rush to legislate will not be easy. It requires elevating the debate among elected officials and their constituents, and a recognition that solving the challenges at the intersection of technology, terrorism, and human rights online takes hard work. The alternative is to keep repeating the mistakes of the past while failing to effectively counter violent extremists and ideologues in the future.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.