Wednesday’s Twitter hack was the worst in the company’s history, striking at the heart of the platform’s integrity. It could have been much, much worse.
In the incident, a hacker—or hackers—was able to commandeer dozens of the platform’s highest-profile accounts and advertise a cryptocurrency scam. Over the course of the afternoon, it appeared as if celebrities like Bill Gates, Elon Musk, Kanye West, Jeff Bezos, Joe Biden, and Barack Obama were offering $2,000 to anyone who would send $1,000 in Bitcoin to a virtual wallet. Corporate accounts for Uber and Apple were also compromised. Preliminary reports suggest that the hackers were somehow able to convince a Twitter employee to provide access to an internal tool that could hijack accounts. Many details of the hack, though, are still unknown to the public.
Commentators quickly noted that someone with control over the most powerful accounts on Twitter could have done far more damage. We’re lucky that all the scammers seem to have done was perpetrate the kind of double-your-money scam that RuneScape players used to pull on one another. (The scheme netted only about $120,000.) “That’s a huge question here. If you had the keys to the kingdom, would you really just order ice cream?” says Daniel Miessler, a security consultant and host of the Unsupervised Learning podcast. “You could make it appear that two world leaders are escalating rhetoric to a dangerous level.”
And as Bloomberg points out, Twitter had a market-shifting level of sway over investors even during its early years. In 2013, hackers were able to erase about $136 billion from the S&P 500 index by breaking into the Associated Press’ account and falsely reporting that explosions at the White House had injured then-President Barack Obama. Elon Musk has also swung Tesla’s stock price with ill-advised tweets on a number of occasions, most infamously when he falsely claimed in 2018 to have “funding secured” to take his car company private at $420 per share. With control over these influential accounts, the hackers could have, for example, sent out an announcement that Bezos is stepping down as Amazon’s CEO or that Apple is cutting ties with Chinese manufacturers in order to move stock prices.
Given the immense havoc that a more canny or ruthless hacker could have wrought with these privileges, some have speculated that the Bitcoin scam may have been a smoke screen for something more nefarious. For one thing, powerful accounts likely had valuable information sitting in their direct messages, which the hackers may have pilfered for future reference. The hackers could have also tried to install a backdoor that would give them continual access or tap into another similarly powerful tool. “An attacker who understands the power of what they have done would probably not just settle for Bitcoin mining. They would probably try to take advantage of an administrator’s credentials to move laterally through different systems inside of an organization,” says Katie Moussouris, a fellow at New America and the CEO of the cybersecurity firm Luta Security. “There’s no evidence that has occurred, but that’s what a sophisticated attacker would do next.”
It’s undoubtedly risky for a company to have a tool that can access this many accounts in the first place. Yet “god mode” is often necessary for a platform, especially in the early days of its existence, when administrators need to continually test certain functions from the points of view of different users. “Twitter was designed to share recipes and what you just had for lunch. These tools make sense for a platform like that,” says Miessler. “But now you’re talking about what these leaders and heads of state are doing.” Once a tool like this comes into existence, however, it becomes integral to regular business operations and thus extremely difficult to eliminate. Ideally, developers place progressively stricter controls on such a tool as the company grows.
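For readers curious what “stricter controls” look like in practice, here is a minimal, purely illustrative sketch of one common approach: instead of a single all-powerful endpoint, each administrative action is gated by an explicit permission tied to a role, and roles are narrowed over time. All names here are hypothetical and are not drawn from Twitter’s actual systems.

```python
# Hypothetical role-based gating for an internal admin tool.
# As a company matures, roles can be split and permissions pruned
# without touching the tool's code, only this mapping.
ROLE_PERMISSIONS = {
    "support_tier1": {"view_profile"},
    "support_tier2": {"view_profile", "reset_password"},
    "trust_and_safety": {
        "view_profile", "reset_password", "change_email", "suspend_account",
    },
}

def authorize(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# A tier-1 agent can look up a profile but cannot change an account's
# email address, so stolen tier-1 credentials alone can't hijack accounts.
assert authorize("support_tier1", "view_profile")
assert not authorize("support_tier1", "change_email")
```

The design choice worth noting is the default-deny posture: an unknown role or unlisted action is refused, so new capabilities must be granted deliberately rather than inherited by accident.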
There are certainly ways to mitigate risk by keeping a “god mode” tool under extremely strict protections so that even a rogue employee would have a hard time abusing it. It’s unclear what measures Twitter had in place, though one good practice is to require at least two employees for any use of a tool with this much power. “Think of it like the nuclear codes and the launch keys,” says Moussouris. “There has to be two people in order to turn the keys.” Companies can also try to curb the power that a given tool has—for example, by limiting it to a certain group of accounts or to a restricted set of controls within an account. On a macro level, companies can also analyze high-profile accounts to determine what a baseline level of activity looks like and then monitor them for anomalous behavior, which could help flag problems more quickly. And, as Moussouris also points out, there’s always the option to shut the whole platform down in the event that a powerful tool does become compromised. In this case, a systemwide lockdown wasn’t necessary—only verified users had their posting privileges disabled while Twitter scrambled to contain the damage. Verdict: not a full Fail Whale, but nearly one.
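The “two keys” rule Moussouris describes can be sketched in a few lines: a sensitive action executes only after two distinct employees have signed off, so a single compromised or rogue insider is blocked. This is an illustrative toy model, not a description of Twitter’s actual tooling; all names are invented.

```python
# Toy model of two-person (dual) control for a sensitive admin action.
class DualControlAction:
    REQUIRED_APPROVALS = 2

    def __init__(self, description: str):
        self.description = description
        self.approvers = set()  # IDs of employees who have signed off

    def approve(self, employee_id: str) -> None:
        # A set makes re-approval by the same employee a no-op,
        # so one person cannot "turn both keys."
        self.approvers.add(employee_id)

    def execute(self) -> str:
        if len(self.approvers) < self.REQUIRED_APPROVALS:
            raise PermissionError("requires approval from two distinct employees")
        return f"executed: {self.description}"

action = DualControlAction("reset email on a verified account")
action.approve("alice")
action.approve("alice")   # same employee approving twice changes nothing
try:
    action.execute()      # still blocked: only one distinct approver
except PermissionError:
    pass
action.approve("bob")     # second key turned
action.execute()          # now permitted
```

The same pattern generalizes to the other mitigations in this paragraph: scoping which accounts the tool can touch is a permission check, and baseline monitoring is an alerting layer on top of the audit trail such approvals naturally produce.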