Has the Internet democratized information? The answer is yes, with an asterisk. While the Web has made information tremendously more accessible, it has also introduced problems of how to classify, rank, and evaluate that information. Truth crushed to earth doesn’t magically rise again. And the information that does percolate to the top isn’t necessarily more accurate than the conventional wisdom of eras past. Rather, it ascends through one or another of the dominating information clusters that I like to think of as information mafias—everything from popular bloggers to journalistic cliques to career reviewers to Reddit moderators, all organized collectives that advance their viewpoints with a variety of underlying agendas, some beneficial and some not.
This is the issue that economists Alex Tabarrok and Tyler Cowen, authors of the blog Marginal Revolution, explore in “The End of Asymmetric Information,” a recent essay in the online journal Cato Unbound. Arguing that the American regulatory apparatus should be sharply reined in because of the increased knowledge available to consumers and businesses, Tabarrok and Cowen draw on a classic 1970 microeconomic thought experiment by economist George Akerlof that introduced the term asymmetric information. It’s about used cars. Your average consumer doesn’t know much about cars and so will refuse to pay more than the average going rate for a used car. A dealer knows things about a used car that the buyer doesn’t: how the previous owner treated it, how much maintenance has been performed on it. So the dealer will tend to sell bad cars rather than good ones, since the buyer refuses to pay extra for good cars. This becomes a vicious cycle, with average quality and price both dropping in the used car market (a dynamic called adverse selection) until the market collapses.
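The unraveling Akerlof described can be sketched in a few lines of code. The numbers here are illustrative choices of mine, not Akerlof’s: qualities spread uniformly between 0 and 1, sellers valuing a car at exactly its quality, and buyers valuing it at 1.5 times its quality but able to observe only the average quality of cars still on offer. The price buyers will rationally pay shrinks every round until the market vanishes:

```python
# Minimal sketch of Akerlof-style "lemons" unraveling, with made-up
# parameters: car qualities are uniform on [0, 1], a seller's reservation
# price equals the car's quality, and buyers value a car at `multiplier`
# times its (unobservable) quality.
def lemons_price(multiplier=1.5, start_price=1.0, rounds=30):
    price = start_price
    history = [price]
    for _ in range(rounds):
        # Only sellers whose reservation price is below the going price
        # keep their cars on the market, so offered quality is uniform
        # on [0, price] and averages price / 2.
        avg_quality = price / 2
        # A rational buyer pays at most multiplier * expected quality.
        price = multiplier * avg_quality
        history.append(price)
    return history

prices = lemons_price()
print(prices[:4])       # [1.0, 0.75, 0.5625, 0.421875]
print(prices[-1] < 1e-3)  # True: good cars flee, and the market collapses
```

Because buyers value cars at less than twice their quality here, each round the price falls to 75 percent of its previous level, driving out ever more good cars; only when the buyer’s multiplier is large enough does the market survive.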
At the time, Akerlof’s paper was a challenge to the overwhelming consensus for what he termed the “perfectly competitive general equilibrium model,” in which competition makes it impossible for any single seller (or buyer) to set or influence the price of a good. Rather, all agents have perfect knowledge and thus make informed decisions based on knowing the actual value of goods, which, even in the absence of regulation, are exchanged in nonexploitative transactions. Akerlof’s paper ignited pushback against that model, suggesting that imperfect and asymmetric knowledge could be decisive structural factors in markets. As economist Joseph Stiglitz put it: Economists “knew that information wasn’t perfect, but hoped that a world with moderate imperfections of information would be akin to a world with perfect information. We showed that this notion was ill-founded: even small imperfections of information could have profound effects on how the economy behaved.”
Now Tabarrok and Cowen are pushing back against asymmetric-knowledge models as well as the regulations and consumer protections that those models frequently justify. As one would expect from an essay published by the libertarian Cato Institute, Tabarrok and Cowen paint a vision of a rosy, deregulated future in which rational individuals make informed choices and Adam Smith’s invisible hand gives us all the thumbs up. They begin:
Technological developments are giving everyone who wants it access to the very best information when it comes to product quality, worker performance, matches to friends and partners, and the nature of financial transactions, among many other areas.
What they fail to observe is that “the very best information” is not 100 percent pure but cut with inferior data ranging from unreliable accounts to deceitful garbage. The problem is not even noisy signals per se, but too many signals. I’m an information junkie, but the greater portion of my effort goes toward screening out bad information (obfuscatory journalism, subtly skewed research, 90 percent of my Twitter feed) rather than taking in good (and not necessarily the very best) information. Tabarrok and Cowen’s claims should be read through the lens that most information is not as good as they want it to be, even if their question—does the increase in available information result in a decrease in information asymmetry?—is worth asking. They conclude: “A lot of economic theories about asymmetric information, while logically correct, have been rendered empirically obsolete.” But have they?
No, not really. At Medium, Slate contributor Adam Elkus points out that Tabarrok and Cowen’s example of the online black market Silk Road as an unregulated marketplace of free information was in fact nothing of the sort: “It was not a bottom-up, decentralized reputation system but one man’s fiefdom,” he writes, alluding to the need for a mastermind Dread Pirate Roberts to oversee the entire marketplace. Tabarrok and Cowen’s other examples of reputation systems, from Uber and Airbnb to Yelp and Amazon, are all unreliable: Extracting accurate information about drivers, hosts, restaurants, and products requires a filter that presupposes knowing which information is accurate in the first place. The asymmetric-information problem doesn’t disappear; it merely regresses. Uber’s ratings are subject not only to the difficulty of determining whether a 4.7 driver is really worse than a 4.8, but also to the internal decisions of Uber, which adds and removes drivers based on its own criteria.
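Even the 4.7-versus-4.8 question is murkier than it looks. A back-of-the-envelope check, using made-up but plausible numbers (200 ratings per driver and a per-rating standard deviation of 0.6; none of these figures come from Uber), treats each rating as a noisy sample and compares the gap between the two averages to its standard error:

```python
import math

# Rough significance check on the gap between two average star ratings.
# All parameters are illustrative assumptions, not real Uber data.
def rating_gap_z(mean_a, n_a, mean_b, n_b, sd=0.6):
    """z-score of the difference between two average ratings, assuming
    each individual rating has standard deviation `sd` (a guess)."""
    se = math.sqrt(sd**2 / n_a + sd**2 / n_b)
    return (mean_a - mean_b) / se

z = rating_gap_z(4.8, 200, 4.7, 200)
print(round(z, 2))  # 1.67 -- below the conventional 1.96 cutoff,
                    # so the 0.1-star gap could easily be noise
```

Under these assumptions the 0.1-star difference doesn’t clear ordinary statistical significance, which is before even asking whether riders rate honestly or whether Uber’s own curation skews the pool.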
Recently, asymmetric information has become a flashpoint in the health care wars. Arguing over Obamacare and health insurance in general, economists and policymakers debate what it means when patients know more about their health than insurance companies, when doctors know more about medicine than their patients (or insurance companies), and when nobody knows what’s wrong with plenty of patients in the first place. Medicine is a terrible microeconomic case study because perfect information is never available: The human body is just too unpredictable and complex. As economist John Quiggin writes in Zombie Economics, “The asymmetry of information is intimately linked to the fact that the benefits of health and education services are hard to predict in advance, or even to verify in retrospect.” Tabarrok and Cowen offer little reason to think that 24/7 heart monitoring and genome sequencing will overcome the need for large-scale risk pooling in health care (they seem to suggest that individuals should pay into a policy exactly what they will get out of it), which provides the underlying justification for Obamacare’s mandate. Their case is so weak on this point that my earlier objections don’t even come into consideration. Health care remains a realm of coping with unknowns.
Yet even in less complicated information ecosystems, perfect information can remain stubbornly unobtainable. Quiggin, speaking to me, cited airline pricing as an example: “They want to fill every seat and get each passenger to pay as much as they are willing to do. So they use more and more sophisticated pricing algorithms to detect the people who really need to travel. But of course the passengers have every incentive to represent themselves as price-sensitive holiday-makers who will only accept the lowest possible fare. This arms race never ends.” Past a certain point, a critical chunk of that “perfect information” still remains locked up in people’s private lives and even in their own minds. You would need Big Brother to obtain perfect information, which is precisely the scenario libertarians claim to oppose.
More generally, the Internet has moved society from a scarcity of signals to a surfeit of them. More information means more good information but also more bad information. The amount of information available on any given transaction can be more than a single person or agent can possibly process; consider Amazon products with thousands of reviews, or the government’s determining whether a person should be put on the No Fly List. These processes are incredibly fallible. We are now in a world where there is vastly more to know, yet our cognitive capacities remain what they were in the Stone Age.
Consequently, “perfect information” now requires perfect selection of information. Who does the selection? The information mafia. I say mafia because the signaling and filtering that determine which information rises to the top are done not rationally, nor democratically, but through the fiat of people who have come into power through various means, few of which relate to the accuracy or quality of their information and filtering. Amazon reviews, cited by Tabarrok and Cowen, are famously gameable and gamed, as evidenced when I discovered that a fantasy graphic novel containing rape and torture had been recommended for “readers of all ages” by one of Amazon’s top reviewers. Wikipedia’s administrators very much operate in a mafialike fashion (albeit a reasonably benevolent one) under the appearance of the rule of law.
Tabarrok and Cowen would do better to look at their projected endpoint. We don’t need to imagine a wholly deregulated world of information sharing to see the consequences because it already exists, and it’s called Reddit. Reddit is an array of informational fiefdoms colonized in Wild West fashion by whoever got there first. Transparency is nonexistent, making accurate filtering of information quite difficult. The politics subreddit can ban Mother Jones and Zero Hedge, the tech subreddit can ban mentions of Tesla, and moderators can sell influence with no transparency unless they’re dumb enough to get caught. Reddit is what a libertarian information market would really look like, and it is not pretty.
Tabarrok and Cowen might argue that the right incentives (financial and otherwise) don’t yet exist for Reddit to promote the best information instead of unreliable, noisy garbage. But that’s exactly the problem: The anarchic information economy can never consistently guarantee those incentives. It will always be a mass of conflicting, ugly motivators to good and bad behavior both. Simon Owens chronicled Reddit’s haphazard attempts to break up its information mafias, an ad hoc process that seems more art than science. (Just as my critical article on Reddit was banned from several subreddits, Owens’ article was banned from /r/technology. So much for perfect information.) We are so far from perfectly filtered information, we cannot even conceive of how to get there.
Tabarrok and Cowen address the information-mafia problem to a point by introducing the idea of algorithmic arbiters. Here is the scenario they paint: “An artificial intelligence can be trained to evaluate information on behalf of the buyer (or seller). … In a potential buyout, for example, a buyer’s A.I. system might be given access to a corporation’s internal financial reports. It would then report back to the buyer whether the corporation was a good buy at the proposed price, and if necessary the memory of the A.I. could be wiped.” What they neglect to mention is that an A.I. must be trained on something, and its training will slant its assessments in all sorts of directions; A.I.s do not become magically objective but simply reflect the biases present at their creation. We move from regulatory capture to algorithmic capture without solving the underlying problem of neutral arbiters. Tabarrok and Cowen’s handwaving appeal to A.I. is as sloppy a move as that of leftists who advocate the nationalizing of particular markets. Both arguments have an elided step in which some miracle must occur.
Tabarrok and Cowen appeal first to information as a solution, then to artificial intelligence technology. Both are solutions to what they already believe to be the problem: too much regulation. I take no issue with revoking obsolete regulations, but they fail to show that either increased information quantity or A.I. will put an end to asymmetric information. I’m not saying that there hasn’t been a decrease in information asymmetry since Akerlof’s paper, but I’m not even sure how you would measure it. Economists cannot even agree on whether having a 40-inch TV instead of a 23-inch model constitutes an increase in the standard of living, much less whether today’s world has less information asymmetry than it used to. Even you, dear reader, face two competing information mafias in reading this: Cato and Slate. Can you figure out whom to trust?
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.