Future Tense

Why Some Privacy Experts Are Worried About Apple’s New Plan to Fight Child Sexual Abuse Material

Surveillance on such a large scale always brings privacy concerns along with it.

In 2016, Apple famously refused to create backdoor software to give the FBI access to an iPhone used by a mass shooter in San Bernardino, California. It said doing so would undermine its commitment to protecting users’ devices from law enforcement. However, Apple’s steadfast dedication to device privacy may be waning.

On Thursday, Apple announced new measures to prevent minors from viewing sexually explicit content on iMessage. It will also begin scanning images uploaded to users’ iCloud accounts for child sexual abuse material, or CSAM. Eradicating CSAM is a noble goal—yet many technology experts are worried that Apple may be on the verge of undermining phone privacy forever.

In the words of Bruce Schneier, a public interest technologist who is a fellow at Harvard’s Kennedy School and a board member for the Electronic Frontier Foundation, “This is pretty shocking coming from Apple, which is generally really good about privacy. It opens the door for all sorts of other surveillance, since now that the system is build [sic] it can be used for all sorts of other messages.”

To understand why, it helps to first wrap your mind around what Apple is doing here. There are three key technologies included in this release: 1) scanning iMessages for sexual content sent to or from kids; 2) searching new iCloud images for child sexual abuse material; and 3) updated Siri and search information for users to seek support or report abuse. These new features will all be rolled out with the release of iOS 15 for the iPhone and macOS Monterey for Apple computers later in 2021, but it’s the first two that are really raising eyebrows.

The iMessage scan only works on child accounts (meaning those for users 12 and under) set up in Family Sharing, according to technical information released by Apple. The system uses on-device machine learning to evaluate image attachments in iMessage for sexually explicit photographic content; if it’s detected, the photo will be blurred, and the user will receive a warning about the content. If the child taps on it and still views the (now unblurred) photograph, parents on the Family Sharing account will then be notified. This system works similarly for users under 12 attempting to send sexually explicit photographs. Deirdre Mulligan, a professor at the University of California–Berkeley’s School of Information and faculty director for its Center for Law & Technology, notes that this iMessage system isn’t a tool for reporting or blocking—instead Apple is “keeping it ‘within the family.’ ”

After this system was first announced, some observers were concerned that it might apply to 13- to 17-year-olds, too, but Apple has since clarified this is not the case. But another fear still applies: that this iMessage monitoring system could be extended to analyze messages for other types of restricted content for users of all ages—like, say, authoritarian leaders using the technology to beef up anti-LGBTQ policies or, as Cambridge University security engineering professor Ross Anderson worries, to ensnare dissidents speaking ill of a regime or sharing anti-government memes via text message. Others, like technology lawyer Kendra Albert, also worry that “keeping it within the family” could be damaging or even dangerous for queer children who may be forced to have discussions with unaccepting or abusive parents about sex and sexting—which could be especially problematic if parents could later opt to have this technology applied to accounts for older children.

That brings us to the second part of Apple’s move. Detection of child sexual abuse material will occur on photos uploaded to iCloud, not on images that merely sit in a user’s library—at least for now. However, the scanning itself will occur on the actual device, which is part of what concerns security experts. Apple will employ a tool called NeuralHash that creates a unique numeric code to represent images; this numeric code or “hash” will then be compared with image hashes contained within a database provided by the National Center for Missing and Exploited Children. If an image meets a certain similarity threshold to hashes for CSAM, image fingerprints and visual derivatives (basically super low-res versions of the imagery) will be reviewed by Apple’s team to confirm a CSAM content violation. Apple claims that the thresholds are high enough that only 1 in 1 trillion accounts should be falsely flagged. If a violation is confirmed, Apple will suspend the iCloud account, and notifications will go out to NCMEC and law enforcement agencies.
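To make the mechanism concrete: the core idea is comparing a compact fingerprint of each image against a list of known-bad fingerprints. The sketch below is a heavily simplified illustration, not Apple’s actual NeuralHash (which is a proprietary, neural-network-based perceptual hash); the hashes, database entries, and distance threshold here are all hypothetical stand-ins.

```python
# Simplified illustration of hash-based image matching.
# In the real system, fingerprints come from NeuralHash and the
# known-CSAM hashes come from NCMEC's database; here they are
# modeled as small integers for clarity.

def hamming_distance(a: int, b: int) -> int:
    """Count how many bits differ between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_database(image_hash: int, database: set, max_distance: int = 8) -> bool:
    """Flag an image if its fingerprint is close to any known hash.

    `max_distance` is a hypothetical similarity threshold. Apple's
    real design also requires multiple matches per account before
    any human review takes place.
    """
    return any(hamming_distance(image_hash, known) <= max_distance
               for known in database)

known_hashes = {0b101100111010}  # stand-in for the NCMEC-provided database

# An image whose fingerprint differs by one bit is flagged as similar;
# a very different fingerprint is not.
print(matches_database(0b101100111000, known_hashes))  # True
print(matches_database(0b010011000101, known_hashes))  # False
```

The key property of a perceptual hash, unlike a cryptographic one, is that visually similar images yield nearby fingerprints, which is why matching is done by distance threshold rather than exact equality.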

This is a big change for Apple, which is not required by law to seek out and monitor users’ devices for CSAM or any other criminal activity—it only must report this material if it does see it. In recent years, Apple has reported a very low number of CSAM violations, but it has also seen pushback from both politicians (as Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory, put it, “the Five Eyes governments [the U.S., the U.K., Canada, Australia, and New Zealand] have been releasing public statement after public statement castigating them”) and civil society for its lack of proactive monitoring. A 2020 New York Times analysis found Apple reported dramatically fewer images than other big tech companies. However, even companies that have historically reported more than Apple have not gone as far as putting the scanning system directly into device operating systems.

At face value, hash analysis seems like a valuable tool for locating CSAM and cracking down on it, especially since the hash analysis is limited to just this narrow category of material as defined by experts. Mulligan says that Apple isn’t defining or identifying CSAM itself—that’s being handled entirely by experts at NCMEC. All Apple is doing is locating material that has already been defined and identified as CSAM in the past. One benefit of this is that by having NCMEC experts conduct the initial classification, you won’t have a situation where someone will have to directly identify CSAM imagery, “which is both a violation of whatever child is being exploited in that video … and the person who’s doing that work,” Mulligan told me.

However, not everyone is so confident in the measures being outlined. Pfefferkorn points out, “This is happening at the operating system level on iOS, on iPad OS, and that’s different. Moving something down from the cloud into being on device … is a huge paradigm shift.” Pfefferkorn is concerned that this is just a test case for Apple. CSAM is an easy way to start, because it’s “politically radioactive to be seen countering or trying to push for better privacy protections in the face of harm to children.” While the company has pledged not to engage if “asked to do this for a political dissident … that doesn’t reassure me,” she said. She imagines that “there will immediately be pressure” on Apple to expand to, say, use this same approach with hash-sharing databases like GIFCT, which is intended to fight the spread of terror material.

Here, Pfefferkorn and Mulligan agree: This could be a dangerous precedent if extended to other types of content. What if Apple does expand to begin scanning devices for extremist material? What if a journalist or human rights advocate is documenting abuse (like beheading videos, which, without context, may be considered objectionable) through maintaining a record of terrorist content?

However, Mulligan doesn’t think that the slippery slope argument about future applications of a hash-analysis system is a sufficient reason not to use this tool. She admits that there are always going to be slippery slope arguments, but having a narrowly focused and reproducible framework for content moderation is critical to prevent those harms from manifesting. She trusts in the actors that Apple has involved, specifically the expertise and standards of NCMEC: “Like, this is the Center for Missing and Exploited Children. This isn’t an industrywide hash database that only industry knows what’s in it.”

The Electronic Frontier Foundation, a digital rights group, disagrees, saying in a statement that Apple’s plan is not a “slippery slope; that’s a fully built system waiting for external pressure to make the slight change.” It also notes that “it’s impossible to build a client-side scanning system that can only be used for sexually explicit images.”

It is also not entirely clear what Apple’s plans for preventing false positives for CSAM are. While the NCMEC database has been curated by experts, Apple’s hash analysis system could have unforeseen flaws. While the company boasts an overwhelming success rate in its hash analysis, some technology experts are calling BS. Jonathan Mayer, an assistant professor in Princeton University’s department of computer science, pulled no punches in critiquing the depth of information released by Apple on the new content moderation plan. Mayer said on Twitter that Apple’s documentation on the subject “uses misleading phrasing to avoid explaining false positives”—while also failing to explain how hashes will be the same for all users.

What is for sure is that Apple’s new policies represent a significant shift. Whether we slip further on this surveillance slope, well, that’s (terrifyingly) up to the tech companies—and, in part, to public pressure from all of us.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
