Future Tense

YouTube’s Policy on Hacking Videos Makes Everyone Less Safe

Photo illustration: Christian Wiediger/Unsplash

There are certain cybersecurity mistakes we seem destined to repeat over and over and over again: forcing people to change their passwords at regular intervals, for instance, which research has shown does not make accounts more secure, or acquiescing to online ransom demands, or punishing people for figuring out clever ways to compromise computer systems and then trying to inform others about the flaws they’ve discovered. That last mistake came up just last week, when many people noticed that YouTube had updated its examples of videos that violate its policies against “harmful or dangerous content” to include “Instructional hacking and phishing: Showing users how to bypass secure computer systems or steal user credentials and personal data.”*

YouTube told the Verge that the updates to its examples of policy-violating content actually occurred in the spring, but the decision drew attention just before the holiday weekend when Null Byte, a channel devoted to ethical hacking, was unable to upload a July 4 video on how to launch fireworks over wireless networks. It turned out that YouTube was punishing the channel because of a previously uploaded video highlighting a technical vulnerability. After significant outcry from the security community, YouTube reversed its decision to block Null Byte from uploading new videos. A platform as large as YouTube will inevitably make some wrong calls about which videos should or should not be blocked. But the larger issue is not whether one video—or one channel—crosses the line from ethical to dangerous content. It’s whether tech companies like YouTube’s parent Google view security researchers and their findings as threats and mischief-makers or as useful and important allies.

A quick scan of the video titles on the Null Byte channel makes clear how it could have drawn unwanted attention. Sample videos include “Take Over Sonos Smart Speakers With Python” and “Steal User Credentials Stored in the Firefox Browser With a USB Rubber Ducky.” Null Byte isn’t the only channel on YouTube dispensing hacking tutorials, of course, but it offers some of the more useful and educational ones. (For comparison, try a YouTube search for videos on “how to hack someone’s Gmail account.”)

The notion that the videos posted by Null Byte are harmful or dangerous traces back to a deeply misguided idea espoused by some tech firms: that anyone who finds a vulnerability in their products or figures out a way to compromise their software is an enemy—and a criminal. It’s not a new idea. For two decades, companies have taken advantage of both the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act to go after security researchers looking for vulnerabilities, even when those researchers clearly just want to raise awareness about the problems they identify.

But while companies like Oracle, Sony, HP, and Blackboard have been going after security researchers for years, it is new to see Google, by way of YouTube, embrace this particularly counterproductive approach by blocking content on “instructional hacking.” The whole point of making videos like the ones Null Byte publishes is to make everyone more aware of security vulnerabilities and better informed about how they work and what to do about them. Blocking those types of videos just serves to make us all less secure by allowing the vulnerabilities they describe to remain unaddressed.

Of course, it is possible to use the techniques described in some of these videos for malicious—and illegal—purposes. But if the video-makers actually wanted to steal sensitive information or sell their vulnerabilities on the black market, then, presumably, they would not be making videos that give away the information for free to any interested viewer. If you should someday find yourself in the position of discovering a computer security vulnerability, arguably one of the most responsible (and least lucrative) things you could do with that information is create a detailed tutorial explaining your discovery and then make it available to everyone free of charge.

What’s truly harmful and dangerous is not instructional hacking videos but the ethos that leads companies to treat the people who make them like criminals, or list them alongside videos that provide instructions on how to “create drugs” or “build a bomb meant to injure or kill people” or videos “promoting or glorifying violent tragedies, such as school shootings.” Certainly, there are risks to making videos publicly available that provide information on how to steal credentials or take over smart speakers. But the risks of not publishing those videos at all are much, much greater.

Correction, July 10, 2019: This article originally misstated that YouTube had updated its policies on what constitutes “harmful or dangerous content” to include “instructional hacking and phishing.” YouTube did not update its policies; it added “instructional hacking and phishing” to examples of content that violates its policies.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.