On Monday, YouTube announced a handful of policy changes that affect how younger minors use the platform. Children under 13 will no longer be able to livestream on YouTube unless an adult is clearly present, a measure intended to “limit the risk of exploitation.” The Google-owned company, which already bans younger children from having their own accounts, will also limit how the platform recommends videos “featuring minors in risky situations,” a move that YouTube says will rely on “new artificial intelligence classifiers” to identify such content.
This comes on the heels of a shocking report in the New York Times describing how YouTube’s recommendation algorithm essentially curated strings of videos of barely clothed children, amplifying their reach and making them easily available to pedophiles. In one case, a 10-year-old girl in Brazil uploaded a video of herself and a friend playing in bathing suits; once promoted by YouTube, it was watched 400,000 times.
The safety of younger users has been an issue on YouTube—and social media in general—for years, but YouTube’s attempts to address this problem might be less familiar than the work done by Facebook and Twitter to deal with objectionable content. YouTube’s efforts illustrate just how difficult it is to insulate children from inappropriate behavior on the web, whether through software or through human moderation, and particularly at YouTube’s massive scale.
February 2015: YouTube introduces YouTube Kids, meant to be a safe version of the platform for young users. It has parental controls, educational content, and limitations on what can be uploaded and searched. In the years since, however, the service’s kid-friendly bubble has been punctured by conspiracy-themed content and worse.
November 2017: In what becomes known as the “Elsagate” controversy, seemingly kid-friendly videos on YouTube Kids turn out to include disturbing scenes. In one of the most notorious examples, Elsa from the movie Frozen is impregnated by Spider-Man. The controversy leads to tighter rules about how videos featuring children’s characters can be uploaded and monetized, and to the deletion of channels and videos that violate these rules.
February 2019: Amid a controversy in which pedophiles swarm the comments sections of children’s videos, a number of major advertisers, including Coca-Cola, Amazon, and Disney, pull their business from YouTube.
March 2019: YouTube disables comments on many videos featuring children. The company acknowledges that the move will disappoint some creators whose comment sections fostered productive discussions, but “we also know that this is the right thing to do to protect the YouTube community,” a spokesperson for the platform tells The Verge.
YouTube’s struggles with abusive, exploitative, bigoted, and otherwise inappropriate content extend far beyond content aimed at kids, as another incident from just this week demonstrates. On Wednesday, YouTube announced it would remove thousands of hateful and extreme videos while tweaking the platform so that it is “raising up authoritative content, reducing the spread of borderline content and rewarding trusted creators.” As part of these actions, YouTube said it would demonetize videos by a prominent right-wing creator who repeatedly insulted the sexual orientation and Cuban American heritage of Carlos Maza, a journalist at Vox. Once again, a platform used by 2 billion people is playing a familiar children’s game: whack-a-mole.