On Thursday morning, a 49-year-old man named Floyd Ray Roseberry from Grover, North Carolina, parked a pickup truck on a sidewalk near the Library of Congress and U.S. Capitol and threatened to set off a bomb. He surrendered to authorities without incident by midafternoon, but not before livestreaming his drive to D.C. and part of his standoff on Facebook. Politico reports that Roseberry began recording at around 7:30 a.m. and was able to keep broadcasting until Facebook cut the stream off and took down his account shortly before 1 p.m. During the stream, Roseberry went on a tirade about a coming “revolution,” accused President Joe Biden of providing military equipment to the Taliban, and lamented his lack of access to health care while claiming that Afghan refugees would receive it. Days earlier, he had posted another video about former President Donald Trump being reinstated to the White House.
Roseberry’s bomb threat did not become widely known to the public until about an hour and a half after he started broadcasting his tirade on Facebook, yet he was able to keep streaming for several more hours before moderators intervened. The company told Slate in a statement, “We are in contact with law enforcement and have removed the suspect’s videos and profile from Facebook and Instagram. Our teams are working to identify, remove, and block any other instances of the suspect’s videos which do not condemn, neutrally discuss the incident or provide neutral news coverage of the issue.” The incident is reminiscent of the 2019 mass shootings at two mosques in Christchurch, New Zealand, during which the perpetrator livestreamed the massacre on Facebook for 17 minutes. Facebook took down the initial livestream and removed 1.5 million videos that included the shooting footage within 24 hours, yet NBC continued to find clips of the attack on the platform six months later.
Could Facebook have taken down Roseberry’s stream more quickly than it did? And has the platform improved its response to livestreamed terrorism since Christchurch?
According to Hany Farid, a computer science professor at the University of California, Berkeley, it’s likely that Facebook’s automated systems would not have been able to detect Roseberry’s livestream by themselves. After all, his broadcasts mostly consisted of him sitting in his car, which isn’t a particularly visually distinctive tableau for image-matching technology. “When you’re talking about billions of uploads a day, the technology is simply not refined enough or accurate enough to work fully automatically, so they largely rely on human moderators,” said Farid. “You might be able to excuse the Facebooks and the YouTubes of the world for catching the live stuff that happens fairly infrequently, but what will be interesting now is to see if they can stop the reuploads.”
While Facebook’s technology might not be able to catch these violent livestreams from the get-go, it has a lot more power to make sure that the resulting clips don’t get recirculated. The platform uses a technology called hashing, which extracts digital signatures from certain notable frames in a video. Those signatures, or “hashes,” then go into a database that other uploaded videos can be matched against. Farid, who worked with Microsoft to develop a hashing technique for combating child pornography, says it’s suspect that such systems are so effective at quickly taking down copyrighted material and adult content, yet seemingly fail to do the same for clips of the Christchurch shooting. “YouTube, when they were threatened with billion-dollar lawsuits, got really good at finding copyrighted material,” Farid said. “I think they [Facebook] just never invested in the technology. When they tell you how hard these problems are, that it’s really hard to find terrorism, ask them why they’re so good at finding adult pornography right away.”
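Facebook’s actual systems aren’t public, but the general idea behind frame hashing can be illustrated with a toy example. The Python sketch below computes a simple “difference hash” signature for individual video frames and checks new frames against a database of signatures taken from flagged footage. The function names, the Pillow dependency, and the distance threshold are illustrative assumptions, not a description of any platform’s real implementation, and real matchers use far more robust signatures.

```python
# Minimal sketch of frame-level perceptual hashing, assuming Pillow is installed.
# Frame extraction from video is out of scope; we operate on individual images.
from PIL import Image


def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Compute a simple 'difference hash': a 64-bit signature that tolerates
    re-encoding and mild edits, so near-duplicate frames hash similarly."""
    gray = image.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(gray.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Count the bits that differ between two signatures."""
    return bin(a ^ b).count("1")


# Stand-in "database" of signatures extracted from notable frames of flagged footage.
flagged_hashes: set = set()


def flag_frame(frame: Image.Image) -> None:
    """Add a frame's signature to the flagged-content database."""
    flagged_hashes.add(dhash(frame))


def matches_flagged(frame: Image.Image, max_distance: int = 10) -> bool:
    """Check an uploaded frame against the database of flagged signatures."""
    h = dhash(frame)
    return any(hamming(h, known) <= max_distance for known in flagged_hashes)
```

In this kind of scheme, a re-encoded or lightly cropped copy of a flagged clip produces signatures close to the originals, which is what lets a platform block reuploads without needing an exact byte-for-byte match.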
Former Federal Trade Commission Chief Technologist Ashkan Soltani isn’t so optimistic that Facebook has improved its approach to videos of terrorism since Christchurch. “No. Absolutely not,” he wrote over email. “They’ve learned that they can get away by paying lip service to important social issues like content moderation, disinformation, and bullying—and instead focus their actual resources chasing grandiose tech-bro dreams of ‘metaverse’ realities which they can more fully control (and monetize).” He added that there is an inherent conflict between removing “rapidly viral content” and Facebook’s “core profit motives,” which creates an incentive to value the most compelling content, a category that includes extreme and provocative speech. Soltani also questioned why Roseberry was able to stream for so long before Facebook pulled his account, given the platform’s purported “viral content review system,” which is supposed to act as a sort of circuit breaker to pause the algorithmic promotion of potentially dangerous content. “At the very least, these systems should have flagged the content for human review, although the fact that the bomber was able to stream for hours without intervention suggests otherwise,” he said.
For Farid, the real test of whether Facebook has learned any lessons from Christchurch will be how many of the would-be D.C. bomber’s videos end up resurfacing, and how much engagement they attract online. In an optimal scenario, the video would be hashed soon after the incident, the data would be uploaded to a database, and that database would be shared with every major platform. So far, it’s unclear how many copies of the video Facebook has taken down. “If the technology is working right, that video should never show up again,” Farid said. “If it’s showing up, how long is it staying online, and how many views is it getting? If we see a repeat of Christchurch, then they’ve learned nothing.”
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.