Future Tense

Big Tech Needs an Entirely New Business Model

Facebook CEO Mark Zuckerberg testifies remotely during a Senate Commerce Committee hearing on Wednesday. Pool/Getty Images

Two decades ago, back when we’d dial up to search Yahoo and chat on AIM, Americans made a bargain. In return for this new information superhighway, as we called it, we would grant profit-seeking platforms control of the spaces where human beings meet, largely ignoring the possible social consequences. The result today—especially in light of Wednesday’s hearings on Section 230 of the Communications Decency Act, which limits platforms’ liability for content posted by users—is a digital media ecosystem whose business models shape the space where ideas circulate.

The state of U.S. politics in the run-up to a momentous election sends us an important signal. Although Joe Biden has a tremendous lead in the national polls, somehow Donald Trump continues to see far more engagement on platforms like Facebook. Something is out of alignment here, but as a society we haven’t quite put our finger on what it is. Wednesday’s hearings provide the perfect illustration: Section 230 reform was not actually a substantial focus for the Commerce Committee, which instead largely engaged in an unproductive partisan sideshow just a week before a presidential election.

Yes, both sides of the aisle are starting to take notice of how broken the internet is: first, a report from the Democrat-led House antitrust subcommittee on Google, Amazon, Facebook, and Apple’s excessive monopoly power, and next, the Justice Department’s recent lawsuit against Google. But even these proposals remain locked within a narrow model of how digital platforms should be regulated. Antitrust law cannot deliver solutions to a problem it was not designed to solve: the negative social side effects of platforms’ basic business model.

In a new paper, we call that business model the “consumer internet”—it’s what results when the vast space of online interaction is managed principally for profit. It has three sides: collection of data on the user to generate behavioral profiles; sophisticated algorithms that curate the content targeted at each user; and the encouragement of engaging, even addictive, content that holds the user’s attention to the exclusion of rivals. The model is designed to maximize the profitable flow of content across platforms. And it applies to companies across the industry—not just Facebook (where one of us, Dipayan, used to work).

Think about what it means that this business model treats all suppliers of content basically the same. As platform operators maximize content traffic by whatever means, disinformation operators and other bad actors, provided they too just want to maximize traffic, can easily align their goals with those of the platforms.

The risks of a computer-based infrastructure of social connection were predicted as long ago as 1948 by the founder of cybernetics, Norbert Wiener, who wrote: “It has long been clear to me that the modern ultra-rapid computing machine was in principle an ideal central nervous system to an apparatus for automatic control. … [W]e were here in the presence of [a] social potentiality of unheard-of importance for good and for evil.”

Wiener’s unease was ignored in the headlong rush to commercially develop the internet in the ’90s. It’s time now for a regulatory reset to address the consequences. Societies through their regulators and lawmakers must renegotiate the balance of power between the corporate platform and the consumer. The way to do this: a digital realignment.

There are two facets to this approach. First, it requires radical reform of the market behind digital media platforms, enabling consumers to exercise real choice about how data that affects them is gathered, processed, and used, including a real choice to use platforms without data being gathered. At a stroke, this will reduce the privacy-undermining impacts of the platforms’ business model. The Obama administration’s Consumer Privacy Bill of Rights offered an initial push toward new thinking on privacy in the United States—and California’s Consumer Privacy Act, a pared-down version of the European General Data Protection Regulation, now stands as the most stringent privacy law on the books in the country. These are good developments that attempt to place economic power back in the hands of consumers; what we need next is a federal bill—national legislation establishing that users own their data and can tell companies how they may use, analyze, or share it. And luckily, given the right political circumstances, this could be in the offing during the next administration.

Second, the government needs to impose much greater transparency on corporations to uncover not just their detailed platform operations but also the so-far-unchecked social harms from which they profit. Platforms should have to disclose the full workings of their business models, revealing where they create advantages for bad social actors. From there, platforms should be legally compelled to take urgent action against social harms they discover or that are reported to them—which might include cases of algorithmic discrimination, computational propaganda, or viral hateful conduct. And more generally, they should cease forms of data collection that corrode broader social values.

We should also adjust platforms’ current blanket immunity from liability under Section 230 of the Communications Decency Act, so that they are responsible for content dissemination that qualifies as mass communication or breaches civil rights laws. Platforms should not be incentivized to help content go viral or to circulate unlawful content.

But even in the absence of these much-needed reforms, the government should consider other drastic measures, such as changes to the theory and enforcement of competition policy for digital firms. Such updates to the law could already be on the table, given the Justice Department’s suit against Google. But in the near-to-medium term, what matters most is addressing the social harms that platforms’ business model is doing right now. This is too urgent to wait for protracted antitrust cases to unfold.

These radical proposals will give societies a chance of rescuing a citizens’ internet from the wreckage of today’s consumer internet. They are as relevant for Europe as for North America, even though regulation is more advanced in the former. For the U.S., with a presidential election approaching whose long build-up has been disfigured by toxic content flowing across platforms large and small, these apparently dry, technical issues could hardly be more consequential.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
