Future Tense

What Mark Zuckerberg Knew and When He Knew It

It took the founder weeks to address Facebook’s latest scandal, but his hands are all over it.

Zuckerberg failed to step in. Kevin Dietsch/Getty Images

In recent weeks, Mark Zuckerberg’s presence on Facebook has mostly involved him talking about the metaverse and engaging in water sports. But on Tuesday night, the founder and CEO finally addressed the scandal that has engulfed his company over the past month, after one of his former employees turned over tens of thousands of unsettling internal documents to the press, federal regulators, and Congress. That whistleblower, Frances Haugen, appeared before the Senate Commerce Committee on Tuesday to speak about revelations that Facebook had conducted research showing that its Instagram subsidiary makes mental health and body image issues worse for some young users, particularly teen girls, and, more broadly, that its algorithms promote divisive and sensationalist content.

Haugen’s grand unifying theory tying together all of the insider information she’s released to the public is that Facebook consistently prioritizes profits over its users. Facebook’s financial incentives to increase growth and engagement, according to her, are at odds with people’s safety and well-being. Whenever the company comes to such a crossroads, it usually goes down the path that helps its bottom line. “I believe Facebook’s products harm children, stoke division, and weaken our democracy,” Haugen said in her opening statement. “The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes, because they have put their astronomical profits before people.”

Zuckerberg took issue with this characterization, arguing in a Facebook post after the hearing that many of Haugen’s claims “don’t make sense,” and generally pointing to the projects his company has undertaken to improve safety, transparency, and research into its platforms’ effects on people. He also took aim at Haugen’s central thesis—that Facebook puts profits before people—and wrote, “The argument that we deliberately push content that makes people angry for profit is deeply illogical. … And I don’t know any tech company that sets out to build products that make people angry or depressed.”

This defense is telling, because it sidesteps some of Haugen’s allegations and speaks to how the Wall Street Journal portrayed his involvement in this scandal in its bombshell series on the leaks. Indeed, while Zuckerberg has generally been quiet on the leaks as part of a reported plan to avoid negative press, he appears to have personally made some crucial decisions over the past few years that led to these problems bubbling up. Based on the publicly available information, it’s true that he didn’t “deliberately” try to push harmful content for profit, and he didn’t “set out” to build products that elicit negative emotions. Instead, Haugen and the Journal portray him as someone who often did have good, if self-serving, motives for decisions and initiatives that didn’t quite turn out the way he intended. Where he appears to have really done wrong is in repeatedly failing to course-correct when there was evidence that his platforms were harming people, out of fear that doing so would depress user engagement and growth. It’s a sin not of action, but of inaction.

Perhaps the best example of this dynamic is Zuckerberg’s move in 2018 to reorient Facebook around content from friends and family rather than from professional publishers or political parties. The decision came mainly in response to the controversies Facebook had weathered after the 2016 election, such as the Cambridge Analytica scandal. Zuckerberg’s ostensible goal was to cut down on divisive politics and news content and promote “meaningful social interactions” among loved ones. The Journal reports that another reason for the change appears to have been that Facebook had been experiencing declines in user engagement—that is, likes and comments and shares on posts—throughout 2017.

Whatever the case, Facebook’s researchers soon realized that the shift was actually deepening divisions. The new algorithmic emphasis on reshares ended up amplifying angry and sensationalist content. Publishers and European political parties told Facebook that they were seeing negative posts resonate more strongly with users. The company’s researchers subsequently recommended fixes that might counteract this deleterious effect, but Zuckerberg declined to pursue some of the proposals out of fear that they would hurt user engagement. One crucial potential fix was to reduce an aspect of the new algorithm known as “downstream MSI,” which promoted posts that were likely to receive likes and comments and proliferate across news feeds through reshares. Research indicated that dialing this back could hamper the spread of misinformation, and the change had already been made in Ethiopia and Myanmar, where Facebook was being blamed for inflaming ethnic violence. Zuckerberg opted not to roll out the change more broadly, however.

When it comes to the Haugen leak that has received the most attention, Facebook’s internal research on how Instagram exacerbates mental health issues in teen girls, it also appears that Zuckerberg was aware of the problem but wouldn’t do anything meaningful about it. According to the Journal, Zuckerberg viewed a presentation in 2020 based on internal research indicating that Instagram was making body image issues worse for 1 in 3 teen girls. There was also evidence that the platform was promoting unhealthy eating habits and negative social comparisons among young users. Despite seeing this, he told Congress in 2021 that Instagram “can have positive mental health benefits” when asked about its impacts on young users, and defended the company’s move to create an Instagram for kids under 13 (a project that the company recently paused to get more feedback from parents and experts).

Zuckerberg did approve at least one measure to combat the toxic environment Instagram creates for many teens, though it was likely ineffective. Facebook and Instagram rolled out a pilot program in 2021 called Project Daisy, which hid like counts on posts on both platforms. A presentation he viewed in 2020 indicated, based on testing, that the change wouldn’t actually “improve life for teens,” but the Journal reports that senior executives argued to him that it would at least look good to outwardly appear to address the issue. Another fix that researchers did recommend, according to slides Haugen leaked, was to institute personalized “mindfulness breaks” that would stop teens from falling down unhealthy rabbit holes of comparing themselves with others. This is the kind of change that would seem to improve well-being but would also, quite literally, stop users from engaging. It wasn’t until the end of September, after the Journal published its series, that Instagram announced it would roll out these mindfulness breaks. It’s unclear, however, whether Zuckerberg was involved in that decision.

The wrinkle in this story is Zuckerberg’s approach to anti-vaccine content. Zuckerberg has traditionally been more aggressive about health misinformation, and he announced in March that he was using his platforms as a tool to get people vaccinated. Facebook partnered with health organizations, pushed out reliable information, and moderated posts about the coronavirus more harshly. Zuckerberg himself made public appearances alongside the White House’s chief medical adviser, Anthony Fauci. Still, anti-vaccine activists were able to game Facebook’s features. An internal study from the spring indicated that 41 percent of comments on vaccine-related posts were discouraging people from getting vaccinated. The company’s measures to demote the anti-vaccine content its algorithms were promoting proved insufficient, partly because its moderation systems weren’t designed to monitor comments.

Unlike in other cases, in which Zuckerberg personally chose to act or not act in a way that affected users, his battle with anti-vaccine content speaks more to how the machine he built is to some extent out of his control. He’s responsible insofar as Dr. Frankenstein is responsible for the actions of his creation. Except, as Haugen points out, Zuckerberg’s monster has a master switch. As she told Congress on Tuesday, “Mark has built an organization that is very metrics-driven. It is intended to be flat. There is no unilateral responsibility. The metrics make the decision. Unfortunately, that itself is a decision. And in the end, if he is the CEO and the chairman of Facebook, he is responsible for those decisions.”

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.