For years, the answer to the question “What’s the most secure consumer device?” has been easy: the iPhone. The most secure against criminal malware, courtesy of Apple’s carefully maintained App Store; the most secure against government surveillance and court orders, courtesy of its default device encryption and end-to-end encrypted iMessage and FaceTime conversations; the most secure against novel exploits and newfound vulnerabilities, courtesy of regular security updates pushed out to devices automatically. So an announcement from Google this week was devastating: Several compromised websites had been spreading malware to iPhones that allowed the perpetrators to steal credentials and photos, monitor users’ messaging activity, and even track their location. Devastating not just for Apple, but for the many people (including me) who use iPhones and have long regarded them as among the most secure devices ever to achieve mainstream success with everyday consumers.
The malware identified by Google targets every recent version of the iPhone’s operating system, from iOS 10 through iOS 12, up until the iOS 12.1.4 update released earlier this year, which patched the relevant vulnerabilities. Google alerted Apple to the five distinct exploit chains it had identified targeting iOS at the beginning of February, prompting the update, which was issued a week later. The prompt patch in response to Google’s alert is to Apple’s credit, but it doesn’t change the fact that the websites distributing these malware strains (none of which has been identified by either Apple or Google) had been operational since 2017 and, according to Google’s estimates, were visited by thousands of people each week.
The malware targeting iPhones was apparently distributed indiscriminately to those site visitors: It was not being used against one or two especially valuable targets, but rather to gather information from iPhone users in bulk. As Google researcher Ian Beer discussed in his announcement of the vulnerabilities, and as Andy Greenberg and Lily Hay Newman pointed out in Wired, that represents a sea change in how we’re used to thinking about iPhone compromises. iPhones are supposed to be expensive and arduous to exploit, requiring so much time and money to break into even a single device that no attacker could justify spending those resources on any but the most high-profile targets.
Beer writes that he hopes “to guide the general discussion around exploitation away from a focus on the million dollar dissident,” referencing a term coined by researchers at the Citizen Lab to describe Ahmed Mansoor, a human rights activist in the United Arab Emirates whose iPhone was targeted in 2016 using what appeared to be very expensive iPhone-specific exploits. So maybe it doesn’t actually cost $1 million to get into your iPhone—what do we do now?
Beer’s point, that all of us who use iPhones are at risk, not just those doing extremely risky work, is an important one. But it doesn’t yield much clear, actionable advice for iPhone users beyond the necessity of downloading the latest updates if you haven’t already. (If you go to “Settings” on your iPhone and then “General,” you can find which version of iOS you are running in the “About” menu and download any updates from the “Software Update” menu; go do it now!) If you’ve been using a vulnerable iPhone and have any important passwords saved on the device, this would also be a good moment to consider changing them, since the malware was capable of accessing the keychain where saved passwords are stored. You can switch to Android, of course, but there is no guarantee that ecosystem will be any more secure.
Is Google quietly hoping for that to happen, having published details of these exploits on the same day the launch date of the iPhone 11 was confirmed? I suppose it’s possible, but it’s hard to argue that Google’s behavior has been anything other than a model here. Google’s Project Zero team, which identified the exploit chains, gave Apple six months’ advance notice about the vulnerabilities before releasing any information to the public, and then provided full, detailed descriptions of the vulnerabilities and malware. And Beer is measured in his criticism in the announcement post on the Project Zero blog; he acknowledges that all devices are vulnerable without calling out Apple specifically. He writes:
Real users make risk decisions based on the public perception of the security of these devices. The reality remains that security protections will never eliminate the risk of attack if you’re being targeted. … All that users can do is be conscious of the fact that mass exploitation still exists and behave accordingly; treating their mobile devices as both integral to their modern lives, yet also as devices which when compromised, can upload their every action into a database to potentially be used against them.
If anything, his goal seems to be merely leveling the playing field a little by pushing back on the public perception of iPhones as far more secure than other mobile devices. And if Apple and Google decided to compete on security by trying to see which company could find the most serious vulnerabilities in the other’s mobile operating system, well, that would be a pretty great outcome for all of us.