Future Tense

The Red Flag Laws That Trump Wants to Stop Mass Shooters Already Exist. Are They Working?

El Paso, Texas, police and the FBI continue to investigate the Cielo Vista Mall Walmart shooting. Mark Ralston/AFP/Getty Images

On Monday, President Donald Trump spoke about the shootings in El Paso, Texas, and Dayton, Ohio, and offered several ways we might prevent mass shootings, almost none of them involving gun control. Along with calling for improved mental health services and less violence in video games, the president also suggested that social media companies and “red flag laws” could help to stop mass shooters before they have a chance to act. Red flag laws, also known as extreme risk protection order laws, allow judges to temporarily prohibit a potentially dangerous individual from possessing firearms.

“I am directing the Department of Justice to work in partnership with local, state, and federal agencies, as well as social media companies, to develop tools that can detect mass shooters before they strike,” Trump said, adding later on, “We must make sure that those judged to pose a grave risk to public safety do not have access to firearms, and that, if they do, those firearms can be taken through rapid due process. That is why I have called for red flag laws.”

That was the extent of the proposal, so it’s unclear what exactly Trump has in mind for social media companies to help prevent mass shootings. Internet platforms have come under scrutiny again after the suspect in El Paso apparently posted an anti-immigrant manifesto on 8chan—a message board frequented by trolls and white supremacists—shortly before the shootings. The perpetrators of the Poway, California, synagogue and Christchurch, New Zealand, shootings also posted to 8chan before the attacks. Critics have also pointed to more prominent tech companies like Facebook, Twitter, and YouTube, where violent white supremacist content often festers.

There’s good reason for some immediate skepticism, especially since Trump’s framing of red flag laws sounded more than a little like the “precrime” policing of Minority Report. While social media platforms have made improvements over the past year in identifying and removing hateful and racist content that violates their rules—and even there, users are constantly pointing out the ways they fall short in removing such material—it’s even harder for them to discern whether a certain post is a warning sign that a user is about to commit violence. Given the gargantuan volume of content that goes up on Twitter and Facebook every day, it may be impossible for law enforcement and the companies themselves to detect red flags for violence on the platforms in time to prevent a mass shooting from happening, and artificial intelligence tools are not advanced enough to automate this task. It can also be difficult to determine, based solely on what users post on social media, whether they are just spouting off or actually primed to commit violence. And being too heavy-handed in labeling people as potential shooters based on their internet activity raises civil liberties concerns.

Despite these challenges, there are state red flag laws currently on the books that allow judges to temporarily bar people from accessing guns based at least partly on their social media activity. So far, these laws have not produced the social media panopticon that worst-case fears would suggest. “This is a type of law that fills an important gap in our existing firearm policy infrastructure,” says Beth McGinty, a Johns Hopkins professor who works with the university’s Center for Gun Policy and Research. “Most state and federal firearm policies predicate firearm removal based on a criminal conviction. … In many cases, including multiple examples of mass shootings, there can be individuals who are behaving dangerously but don’t have one of those prohibiting criteria.”

Red flag laws generally require police or acquaintances of a potentially dangerous individual to file a petition explaining why that person shouldn’t be allowed to possess a gun. Over the course of two hearings, a judge then looks at the evidence and decides whether that person can safely have access to firearms. Violent social media posts can be a part of a judge’s decision, but they are usually only one piece of the puzzle. While McGinty says that it’s not out of the question for a judge to revoke gun access solely based on social media activity, other factors are usually considered as well, such as a recent divorce or death in the family, a history of cruelty to animals, or drug and alcohol abuse. “These criteria have been shown in research to be good risk factors for future violent behavior, so the judge has a guide from which to draw,” McGinty says. “It is not, of course, a perfect predictor, which is why the restriction is temporary.” The orders usually last for a year, though petitioners can request to renew them.

At least 17 states currently have red flag laws, including California, Illinois, and Florida. Connecticut was the first to pass such a measure in 1999, though most red flag laws have gone into effect over the past decade. Advocates and state lawmakers in recent years have specifically pointed to alarming social media posts as a reason for passing the legislation. “Facebook and other platforms have been making a concerted effort to alert trusted individuals or law enforcement to high-risk posts online,” Citizens Crime Commission President Richard Aborn said in a 2018 New York Senate press release calling for a red flag law, which the state eventually passed this January. “But these efforts are useless if the people alerted do not have the tools to intervene.”

But because most of these state laws have only been in effect for a few years, we don’t yet know how effective extreme risk protection orders are at preventing mass shootings, much less whether certain types of social media posts are a good point of reference for issuing them. “[Mass shootings] happen far too frequently in our society, but from a statistical perspective, they are very rare events,” says McGinty. “It’s really difficult to study in a valid way the effect of a single policy on reducing rare events like mass shooting.” There is, however, some preliminary evidence suggesting that extreme risk protection orders have been effective in stopping suicides.

Social media companies could conceivably work within this existing extreme risk protection order system, flagging posts for courts and police to consider. Whether it’s a good idea to have the companies play a role in the temporary gun revocation process, though—and have the encouragement of the federal government to analyze more and more user content for potential policing—is a different question.
