Whose fault is it that nude photos of dozens of celebrities, pilfered from their iCloud accounts, were widely released online late last month? As always when it comes to the Internet, there are lots of entities involved—and therefore lots of opinions about who bore what responsibilities. Should the celebrities have chosen stronger passwords? Should Apple have required them to choose stronger passwords or limited the number of login attempts? Should sites like Reddit and 4chan have been responsible for promptly removing the photos (or links to them)?
Slate’s David Auerbach makes a compelling argument for why we should “Blame Apple” for its lax security measures. American model Joy Corrigan, best known for her Instagram work, has apparently taken his advice to heart: She’s reportedly launching a class-action suit against the company. Corrigan says she contacted Apple in July about her account being compromised, following the release of her naked photos online, and Apple told her that her password had been phished and she should change it. She did, but her account was then breached again, just days later, she says. (Corrigan seems not to have been part of the mass hack that affected Jennifer Lawrence and others, but the problem is the same.)
Corrigan’s lawsuit may not get far. She, and any others who join the class-action suit, will have to make a compelling case for why they were harmed by the leaks. It may seem obvious that having your nude photos or videos released to the public is damaging and distressing, but it’s not an easy kind of harm to classify or quantify—it’s not clearly physical or financial harm.
Typically, researchers have found that when data breaches result in financial harm, firms are much more likely to be sued—and much more likely to settle. But the iCloud case is unusual in that the breached information included not credit card numbers or Social Security numbers, but something at once much more intimate and far less likely to lead to financial fraud.
One benefit of this suit, if it progresses, may be in forcing us to think more carefully about the kinds of harm associated with data breaches beyond just financial loss. That analysis is part of a larger debate around the value of privacy and the costs of losing it, and it’s a conversation well worth having in a legal context because it may have implications far beyond just the Apple case. It could even play into questions about what harms were suffered by individuals who were spied on by the NSA and whether they have standing to take legal action of their own.
But identifying the specific harm suffered by the victims of the iCloud breach is only part of what makes Corrigan’s suit interesting and important. The other part comes back to the issue of blame and who is (legally) responsible for doing what in the context of online security. There’s a great deal of ambiguity in this area, and in some respects that may be a good thing. We don’t necessarily want the government to be issuing security requirements for private companies like Apple in the same way it mandates safety standards for cars or airplanes. For one thing, the technological security tools and threats change much more rapidly than the ones associated with transportation safety. For another, computers are used for a much wider range of functions and services than cars or airplanes, and the appropriate level and types of security may be influenced by those uses. There’s even potentially some benefit in different companies taking different approaches to securing their computer systems—it means that someone who finds a way to breach one company’s protections cannot necessarily use that to get into everyone else’s systems.
Because of this, when the government has gotten involved in liability issues surrounding security breaches, it has typically done so without specifying exactly what a company must do to be absolved of that liability. For instance, the Federal Trade Commission is currently litigating a suit against Wyndham Worldwide Corp., the owner of a large hotel chain that experienced three breaches of its customers’ personal data between April 2008 and January 2010.
In its complaint, the FTC alleges that Wyndham failed “to maintain reasonable security” and lists 10 specific “inadequate data security practices,” including storing unencrypted payment card information and failing to require complex passwords. But none of these practices is actually illegal—and each, individually, would not necessarily constitute inadequate security, the complaint implies. Rather, it was the combination of these practices that “taken together, unreasonably and unnecessarily exposed consumers’ personal data,” the FTC alleges. This is, among other things, a handy way of sidestepping the question of what, specifically, Wyndham—and other companies—must do to maintain reasonable security.
The same logic could perhaps be applied to Apple. The company has been faulted for several individual security practices—including failing to limit how many times a person can enter an incorrect password to log in to its Find My iPhone service (a protection known as “rate limiting”) and not enforcing two-factor authentication for iCloud logins. But perhaps no single one of these practices on its own is as important for determining liability as the combination of all of the security choices Apple made. (Slate’s Auerbach lists five.) Or it’s possible that a particular security practice will catch a court’s attention as especially negligent—a judge who suggested, for instance, that Apple’s decision not to rate-limit password guesses on Find My iPhone was at the heart of its liability could have far-reaching impact. Presumably, other companies would then rush to rate limit login attempts (if they haven’t already), and that might be a good thing—but it might also have other, unforeseen consequences. If my email locks out for five minutes after five incorrect login attempts, for instance, it becomes harder for someone to guess my password by brute force—but much easier for that person to lock me out of my email by entering five wrong passwords every five minutes so that I can never log in.
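The tradeoff described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Apple’s actual implementation: the account names, thresholds (five attempts, five-minute cooldown), and the `RateLimiter` class are all illustrative assumptions. After too many failed logins the account locks for a cooldown period, which blunts brute-force guessing—but, as the example at the end shows, it also lets an attacker deliberately lock the real owner out.

```python
import time

class RateLimiter:
    """Hypothetical sketch of login rate limiting: lock an account
    for a cooldown period after too many consecutive failures."""

    def __init__(self, max_attempts=5, lockout_seconds=300):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_seconds
        self.failures = {}       # account -> consecutive failed attempts
        self.locked_until = {}   # account -> time when the lockout expires

    def attempt(self, account, password_ok, now=None):
        now = time.time() if now is None else now
        if now < self.locked_until.get(account, 0):
            return "locked"      # even a correct password is rejected
        if password_ok:
            self.failures[account] = 0
            return "ok"
        self.failures[account] = self.failures.get(account, 0) + 1
        if self.failures[account] >= self.max_attempts:
            self.locked_until[account] = now + self.lockout_seconds
            self.failures[account] = 0
            return "locked"
        return "denied"

rl = RateLimiter()
t = 1000.0
# An attacker burns through five wrong guesses...
for _ in range(5):
    result = rl.attempt("victim", password_ok=False, now=t)
# ...and now the legitimate owner, with the CORRECT password,
# is locked out too until the cooldown expires:
owner_result = rl.attempt("victim", password_ok=True, now=t + 10)
```

The last line is the unforeseen consequence the article warns about: the same mechanism that defeats brute force hands an attacker a cheap denial-of-service against the account’s owner.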
Issie Lapowsky wrote a piece in Wired earlier this month titled “We’d All Benefit if Celebs Sue Apple Over the Photo Hack,” arguing that “a lawsuit over the high-profile hack may be just the thing to push Apple and other online companies to more aggressively protect the people using their services.” Corrigan’s case may or may not inspire further security changes—after all, Apple (and presumably other nervous cloud providers) has already taken steps to shore up security in the wake of the negative publicity around the incident—but if it moves forward, it could force courts to clarify companies’ security responsibilities and liability.
Those responsibilities are still very ambiguous, and there may even be some advantages to keeping them so. But in the absence of a clear culprit—that is, the person (or people) who stole and distributed the photos—it’s natural to look for someone else to blame and some way to prevent future thefts. If we could lock up the responsible parties, that might deter some future attempts, but if we can’t even do that, then at the very least we need to do something that will make it harder for others to succeed.
A lot of third parties contributed in one way or another to that success, and who you blame depends partly on what you view as the crux of the crime—the taking and storing of naked photos, or the ability of someone else to access them, or the ease with which they were plastered all over the Internet. Another—perhaps more practical—way to choose is to look at which of these actors could easily have the greatest impact in changing its practices and policies. There are a lot of iCloud users and risqué websites out there, and we’re unlikely to be able to change the behavior of all of them—on the other hand, there is a more limited number of popular, personal cloud storage services on par with Apple’s. So even if you don’t believe that they’re at fault for the incident, they are still in many ways the actors best poised to address it.
This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.