Last week, Meta, Discord, and Apple admitted to a deeply bizarre and troubling mistake: They had all handed user data over to hackers who forged law enforcement emergency data requests via compromised email accounts. It’s a scary situation not just because user information was disclosed to hackers posing as government officials—though, of course, that’s a huge problem—but also because there’s no straightforward way for the companies or law enforcement officials to solve this moving forward, at least not without facing huge trade-offs.
Emergency data requests are a way for law enforcement officials to obtain information without having to go through the standard legal processes to obtain a search warrant or subpoena authorized by a judge. Obviously, those legal processes are in place for a reason and provide valuable checks on law enforcement access to data, so emergency data requests are used only for, well, emergencies. If officials have a reason to believe that there is an urgent need for the data they are seeking—someone’s life is at risk, for instance—then they can submit these requests via email or online portals without having to obtain a judge’s authorization. The company that receives the emergency request can then decide whether to turn over the requested information based on the nature of the request and whether it meets its criteria for an emergency. (Since these requests are not court-issued warrants, companies can decide whether or not to comply with them.)
Security reporter Brian Krebs linked these fake emergency data requests to the Lapsus$ hacking group in late March. Last week, two teenagers in the U.K. were charged with cybercrimes linked to Lapsus$, which has drawn attention recently for breaching several major tech firms, including Microsoft and Nvidia.
Krebs noted that a hacker linked to Lapsus$ and its predecessor, the now-defunct hacking group Recursion Team, had posted in an online forum offering to sell compromised government email accounts that “can be used for subpoena for many companies such as Apple, Uber, Instagram, etc.” The postings suggest that the hackers rely on compromising email accounts belonging to police departments and using those emails to submit emergency data requests to tech firms. Apple, Meta (the parent company of Facebook), and Discord all turned over customer information in response to forged emergency data requests, according to a report last month by Bloomberg.
It’s a clever tactic for extracting information from tech companies because there are no simple or easy solutions. Companies could decide to stop responding to any emergency data requests for fear that they might be forged, but that would potentially jeopardize people’s safety in cases where the requests were legitimate and actually served a valuable purpose. We don’t know how often these requests are literally life-and-death situations, of course, but Apple, for instance, on its page describing the procedure for making such requests, specifies that emergency requests should “relate to circumstances involving imminent danger of death or serious physical injury to any person … for example, instances where law enforcement believe a person is missing and in danger.” And given that Apple fulfilled 90 percent or more of the emergency requests it received in 2019 and 2020, the company seems to believe that many of the requests being submitted worldwide meet this standard. All of which suggests there could be significant costs to companies deciding they no longer wanted to respond to such requests, or even just that they would respond more slowly after verifying that requests hadn’t come from hacked accounts.
Another possibility would be for Congress to legislate stricter security measures for emergency data requests. Sen. Ron Wyden told Krebs on Security that he was requesting information from tech companies and federal agencies about fraudulent emergency data requests. This isn’t a totally new concern—Wyden and other lawmakers had already been worried about the potential for hackers to forge court orders even before the recent news broke. In 2021, Wyden, together with other senators, introduced the Digital Authenticity for Court Orders Act, which would require courts to use cryptographic digital signatures when issuing court orders in order to make it harder for scammers to forge such documents. But even if it were passed by Congress, that bill would have no effect on emergency data requests, which do not require a court order—though a similar digital-signature requirement could conceivably be extended to them.
More importantly, requiring digital signatures would not solve the problem of compromised government and law enforcement accounts being used to issue fraudulent data requests. Just as hackers can steal credentials and get access to law enforcement email accounts, they could also very plausibly steal the credentials needed to issue digital signatures. There’s no easy technical solution to this problem—and the non-technical solutions all require taking the time to verify that the request a company receives really is coming from the entity that appears to have submitted it, a delay that undercuts the core purpose of emergency data requests: bypassing the slower process of obtaining a court order.
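The mechanism the bill envisions, and its limit, can be sketched in miniature. This toy Python example uses a shared-key HMAC from the standard library as a simplified stand-in for the asymmetric digital signatures a real system would use; the names (`sign_request`, `verify_request`, `COURT_KEY`) are hypothetical, not from any actual system. The last lines illustrate the problem described above: a signature proves only that the holder of the key produced the request, so an attacker who steals the key can mint forgeries that verify perfectly.

```python
import hashlib
import hmac

# Hypothetical signing key held by the issuing court. An attacker who
# steals this key can forge requests that pass verification.
COURT_KEY = b"not-a-real-key"

def sign_request(request: bytes, key: bytes) -> str:
    """Attach an HMAC-SHA256 tag, standing in for a digital signature."""
    return hmac.new(key, request, hashlib.sha256).hexdigest()

def verify_request(request: bytes, tag: str, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, request, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

request = b"EMERGENCY: disclose records for account 12345"
tag = sign_request(request, COURT_KEY)

assert verify_request(request, tag, COURT_KEY)          # legitimate request verifies
assert not verify_request(b"tampered", tag, COURT_KEY)  # altered content is caught
# But a stolen key defeats the scheme entirely:
forged_tag = sign_request(b"fraudulent request", COURT_KEY)
assert verify_request(b"fraudulent request", forged_tag, COURT_KEY)
```

Signatures thus stop an attacker from altering or fabricating a document without the key, but they do nothing against an attacker who has compromised the account, and the key, of a legitimate issuer.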
As Berkeley security researcher Nicholas Weaver put it in an interview with Krebs, “It’s a fundamentally unfixable problem without completely redoing how we think about identity on the Internet on a national scale.”
There are countries that have tackled the problem of how to manage online identities at a national scale. For instance, in Estonia the government famously issues everyone a digital identity in much the same way that other countries issue citizens passports. But the United States is a very long way from such a centralized, government-issued digital identity framework that could enable companies to verify the identity of senders of online data requests. And even if such a framework existed, it could still be susceptible to compromise and theft.
That means that it will probably continue to be up to individual companies to vet the emergency data requests they receive and decide whether they want to respond. There’s just no way to respond to online requests immediately and also do a careful job verifying who submitted them. And when the stakes are trying to find a missing person or protect someone who is in immediate danger, that verification process may not make sense—even if it could stop companies from responding to fraudulent requests. It’s possible that moving forward companies will be more wary of providing information in response to these requests, and that may be a good thing if fraudulent emergency data requests are on the rise.
But it’s difficult to see how companies could decide to stop responding to such requests entirely, or even implement a very time-intensive vetting process to establish their authenticity, given the urgency of these requests. In the end, it’s quite possible that responding to a few fraudulent requests will be seen as a reasonable price to pay for being able to help law enforcement in emergency situations—and it’s quite possible that calculus will be correct.