In July 2020, hackers compromised the Twitter accounts of several high-profile users, including Elon Musk, Barack Obama, and Bill Gates, by using an employee's login credentials to access internal tools and take over the accounts. The whole thing was an embarrassing mess for the social media company, and it was only made more embarrassing later that month when Reuters reported that more than 1,000 people at Twitter had access to the internal tools that were used to compromise the accounts. But it was, at least, an easily fixable problem. Indeed, Twitter said at the time that it was taking steps to "limit access to internal systems and tools," presumably so hackers wouldn't have so many options if they wanted to steal employee credentials and wreak havoc on user accounts.
But was that statement—or any of the many other vague promises Twitter has made about its commitment to user privacy and security over the past decade—actually supported by any meaningful action? In a lengthy complaint filed last month with the Federal Trade Commission, the Securities and Exchange Commission, and the Justice Department, former Twitter head of security Peiter “Mudge” Zatko suggests that when it comes to protecting privacy and security, the company has been all talk and no action, hiding behind vague language about its security practices and misleading its own board of directors as well as regulators, all while doing essentially nothing to implement basic security measures.
The Washington Post and CNN broke the news of Zatko’s whistleblower complaint on Tuesday, and it’s a bombshell. In it, Mudge alleges that he was fired by Twitter in January after repeatedly raising several of the concerns in the report with higher-ups at the company. Twitter CEO Parag Agrawal said in a message to Twitter employees on Tuesday that Mudge was fired “for ineffective leadership and poor performance” and that the company was reviewing the claims in the complaint and had so far found it to be a “false narrative that is riddled with inconsistencies and inaccuracies, and presented without important context.”
If that’s the case, Twitter should move quickly to clarify that context, because some of the allegations are pretty appalling. A respected hacker and cybersecurity executive, Mudge brings significant credibility to his allegations. But they’re also worth evaluating one by one, both because of the technical and legal complexities involved, and because some of them are still hazy.
Employee Access to Sensitive Data
Rather than shrinking the number of employees with access to sensitive information after the 2020 hack, Twitter appears to have expanded it: according to the redacted version of the whistleblower complaint released by the Washington Post, "in January 2022, over half of Twitter's 8,000-person staff was authorized to access the live production environment and sensitive user data. Twitter lacked the ability to know who accessed systems or data or what they did with it in much of their environment." In response, Twitter explained to CNN that employees can access these resources only "if they have a specific business justification for doing so." Both the allegation and the company's defense would benefit from greater clarity: it's not clear what constitutes a business justification, but it's also not clear exactly what sensitive user data thousands of employees could reach. Does Mudge mean that more than 4,000 Twitter employees had access to the tool the hackers used in 2020 to take over other people's accounts?
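For a sense of what the company's claimed safeguard could look like in practice, here is a minimal, purely illustrative Python sketch of an access gate that requires a recorded business justification and keeps the kind of audit trail Mudge says was missing across much of Twitter's environment. Every name in it (AUTHORIZED_ROLES, read_user_record, and so on) is hypothetical, not a description of Twitter's actual systems.

```python
# A minimal sketch (not Twitter's actual system) of gating access to sensitive
# user data behind an allowlisted role and a recorded "business justification."
# All names here are hypothetical.
import json
import time

AUTHORIZED_ROLES = {"trust-and-safety", "legal-escalations"}  # hypothetical allowlist


def _audit(entry):
    """Append every access attempt, granted or denied, to a simple audit log."""
    with open("access_audit.log", "a") as f:
        f.write(json.dumps(entry) + "\n")


def read_user_record(employee, role, justification, user_id):
    """Return a user record only for allowlisted roles, logging who asked, why, and when."""
    entry = {
        "ts": time.time(),
        "employee": employee,
        "role": role,
        "user_id": user_id,
        "justification": justification,
    }
    if role not in AUTHORIZED_ROLES or not justification.strip():
        entry["outcome"] = "denied"
        _audit(entry)
        raise PermissionError(f"{employee} ({role}) denied access to user {user_id}")
    entry["outcome"] = "granted"
    _audit(entry)  # the trail that lets a company answer "who accessed what, and why"
    return {"user_id": user_id}  # stand-in for the real lookup
```

The point of the sketch is the logging as much as the gate: even a crude append-only record would let a company answer the question Mudge says Twitter couldn't answer, namely who accessed which data and what they did with it.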
Even without those details, though, the complaint suggests, at the very least, that the company largely failed to learn from its past security mistakes. More than that, it suggests the company deliberately misled both its own board and the regulators monitoring its activities about how well it had strengthened its security and privacy protections following a string of significant incidents. No matter how you cut it, both parts of the allegation—the bad security and the lying about it—are a big deal.
Employees’ Devices
Poor cybersecurity practices are bad for customers, but by and large they are not illegal. So while it's certainly not great that, according to the complaint, many of Twitter's servers could not encrypt stored data, that software and security updates were disabled on more than 30 percent of employee devices, and that none of the employee computers were being backed up, there's also no law requiring that Twitter's computers support encryption or regularly download security updates.
Twitter suggested that these concerns were exaggerated, telling CNN that the company has systems to "prevent a device from connecting to sensitive internal systems if it is running outdated software" and "uses automated checks to ensure laptops running outdated software cannot access the production environment." It's possible that nearly one-third of employee devices were running outdated software but also couldn't reach any sensitive or important systems, and that context, if accurate, would certainly change the seriousness of Mudge's allegation. It's still a little surprising, though, that the company would take such a lax approach to security updates.
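Twitter's description implies a device-posture gate along these lines. The sketch below is a hypothetical Python illustration, assuming a simple minimum patch level and a couple of hygiene checks; it is not the company's actual tooling, and every name in it is invented.

```python
# A minimal sketch, assuming nothing about Twitter's real tooling, of an
# automated posture check: a device reporting outdated software or poor
# hygiene is refused production access. All names are illustrative.
MIN_PATCH_LEVEL = (2022, 8)  # hypothetical minimum (year, month) security patch


def device_may_access_production(reported_patch_level, disk_encrypted, updates_enabled):
    """Gate production access on basic device hygiene checks."""
    checks = {
        "patched": tuple(reported_patch_level) >= MIN_PATCH_LEVEL,
        "encrypted": disk_encrypted,
        "auto-updates": updates_enabled,
    }
    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        print(f"access denied; failed checks: {', '.join(failed)}")
        return False
    return True


# Example: a fully patched laptop with updates disabled (one of Mudge's
# allegations about 30 percent of devices) would still be blocked.
device_may_access_production((2022, 9), disk_encrypted=True, updates_enabled=False)
```

If checks like these really were in place, the unpatched devices Mudge counted would have been locked out of production, which is presumably the context Twitter says the complaint leaves out.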
To confuse matters somewhat, one source also told CNN that the statistics on insecure devices that Mudge used “were derived by a small team that did not properly account for Twitter’s existing security procedures.” The report itself is a little vague on where some of the numbers come from. For instance, Mudge’s assessment that more than half of the company’s employees could access sensitive user data is attributed, in the complaint, to “expert quantification and analysis.” On the devices question, both Twitter and Mudge need to tell us more.
Twitter and the Feds
On their own, the revelations about device security and employee access to data would be embarrassing for the company but unlikely to land them in serious trouble—especially if there do turn out to be some mitigating circumstances. However, this is not the first (or even the second) time that Twitter’s security practices have come under scrutiny. In 2011, the FTC reached a settlement with the company after finding that Twitter had failed to safeguard users’ personal information. As part of the settlement, Twitter agreed to establish “a comprehensive information security program that is reasonably designed to protect the security, privacy, confidentiality, and integrity of nonpublic consumer information” and also agreed that it would “not misrepresent in any manner, expressly or by implication, the extent to which [Twitter] maintains and protects the security, privacy, confidentiality, or integrity” of user data. That settlement gives the FTC the opportunity to pursue much more significant penalties against Twitter if it finds the company has violated the terms. And indeed, the whistleblower complaint states outright that Mudge’s review of Twitter’s security found that “Twitter had never been in compliance with the 2011 FTC Consent Order, and was not on track to ever achieve full compliance.”
All the time, companies suffer embarrassing cybersecurity breaches, like the July 2020 Twitter hack, that they could—and should—have prevented. They rarely suffer more than a slap on the wrist for those breaches, but one way to encounter legitimately steep penalties—like the FTC's $5 billion fine of Facebook in 2019, after the company violated its 2012 settlement with the agency—is to not do the things they specifically promised regulators they would do in the wake of those incidents. Arguably the most disturbing thing about Mudge's complaint is how clearly it makes the case that Twitter has learned nothing from a decade and a half of security screwups and regulatory investigations.
For instance, Twitter was fined $150 million earlier this year for violating the 2011 FTC settlement by taking email addresses and phone numbers that it told users it was collecting for security purposes and using them to target ads. According to Mudge's complaint, even as that investigation was being conducted and Twitter was negotiating with the FTC, the company was continuing to use data for ad targeting that it couldn't be sure it was actually allowed to use. Mudge reports that one Twitter executive said of the incident: "So we only started to address the problem, and then got side tracked and forgot about it? We do that for everything."
Indeed, a failure to follow through on even the most basic security promises—encrypting data, deleting the data of users who deactivate their accounts, implementing a secure software development lifecycle process, limiting how many employees have access to sensitive user data—seems to be the defining characteristic of Twitter’s approach to cybersecurity as described by Mudge. It’s the kind of lackadaisical, irresponsible attitude you might expect from a new startup with only a handful of employees and no security budget, but certainly not from a large, publicly traded tech company with a dedicated security team and several FTC investigations under its belt during which it had explicitly promised to implement many of these safeguards.
Of course, the language of the FTC settlement is vague—people can and do argue about what constitutes "a comprehensive information security program"—and Twitter often appears to have stopped just short of lying outright about its security. For instance, according to Mudge's complaint, the company claimed that it was making progress toward implementing secure software development processes rather than saying it had actually done so, and told the FTC it had "deactivated" deleted user accounts rather than promising it had fully deleted the data associated with those accounts. But all the same, the complaint paints a damning picture of the company.
Everything Else
Overall, Twitter comes off as a company that’s learned precious little from past failures. Rather than rolling back employee access to user data, it was apparently expanding that access. Rather than vetting its employees more carefully for potential spies after a former employee was charged with providing information to the Saudi government, it was—in perhaps the most explosive charge in the whistleblower complaint—allegedly agreeing to hire agents of the Indian government.
There are other allegations in the complaint as well, ones that have fewer implications for the company’s security but may still have significant consequences. For instance, a long section of the complaint focuses on Twitter’s failure to take more aggressive actions to block and remove bot accounts, alleging that the company misled Elon Musk about how many bots were on the site. (Musk’s lawyers, preparing for a court battle this fall to get out of a deal to purchase Twitter, were undoubtedly pleased to see that one.) Another potentially major claim raised by Mudge is that the company is using machine-learning models and data sets without the proper licenses. According to the complaint, “Litigation by the true owners of the relevant IP could force Twitter to pay massive monetary damages, and/or obtain an injunction putting an end to Twitter’s entire Responsible Machine Learning program and all products derived from it.”
Presumably, there are even more—and even more serious—allegations described in the full, unredacted complaint. But even the redacted version offers a clear picture of how little Twitter cares about protecting user data and how unwilling it has been to change following past security breaches or its encounters with regulators. It's a demoralizing read, not just because it's a reminder of how poor cybersecurity still is at many companies, but because it's a reminder that even in the relatively small number of cases where the FTC investigates data privacy and security concerns, and where the regulatory enforcement mechanisms for holding companies accountable actually kick in, there's still no guarantee that any of that oversight will translate into stronger cybersecurity.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.