In the run-up to last year’s Italian elections, the country’s senate did not—I repeat: did not—pass a bill giving legislators 134 billion euros “to find a job in case of defeat.” But a satiric story along those lines spread on social media, and not everyone who passed it along understood that it was a spoof. In just one day, 36,000 people signed a petition against the alleged law. Soon it was being invoked at anti-government protests.
Their confusion caught the eye of a quintet of scholars, who were observing how a large sample of Italian Facebook users engaged with different sorts of stories: articles from the mainstream media, articles from alternative outlets, articles from political activists, and fake news crafted by satirists and trolls. In March, MIT’s Technology Review covered the researchers’ work in a piece headlined “Data Mining Reveals How Conspiracy Theories Emerge on Facebook.” The article began with the tale of that imaginary Italian bill and the people who believed it was real, wrapping up the anecdote with the line, “Welcome to the murky world of conspiracy theories.”
This was an odd way to frame the issue. The rumor involved a bill that had supposedly been passed by the legislature, not a secret plan being hatched by some invisible cabal; it was not in any meaningful sense a story about a conspiracy. The larger study was concerned with the transmission of false stories, whether or not they involve conspiracies; the word conspiracy and its variants appear only four times in the paper. Yet the Technology Review piece brushes past this distinction, then compounds the problem by generalizing rather expansively from the research. “Conspiracy theories,” the writer speculates, “seem to come about by a process in which ordinary satirical commentary or obviously false content somehow jumps the credulity barrier. And that seems to happen through groups of people who deliberately expose themselves to alternative sources of news.” Evidently more than one credulity barrier has been breached.
If Technology Review defined the phrase “conspiracy theory” too broadly, other outlets adopt definitions that are too narrow. In 2013, Fairleigh Dickinson University’s PublicMind Poll concluded that 63 percent of America’s registered voters “buy into at least one political conspiracy theory.” The press duly reported that exact-sounding number, though it wasn’t really accurate: What the survey actually found was that 63 percent of voters believed at least one of the four theories featured in the poll. The number who believe in “at least one” conspiracy is surely far higher.
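The measurement point here can be made concrete with a toy calculation. Under a deliberately simplistic assumption (not from the poll itself) that each respondent believes each theory independently with some per-theory probability, the share who believe "at least one" item grows with the length of the list, so a four-item questionnaire gives only a floor. All the numbers below are hypothetical, chosen so that four items yield roughly the poll's 63 percent:

```python
# Toy illustration, not data from the PublicMind Poll: assume each respondent
# believes each theory independently with probability p. Then the share who
# believe "at least one" of n theories is 1 - (1 - p)**n, which rises as the
# list of theories grows.

def share_believing_at_least_one(p: float, n_theories: int) -> float:
    """P(believing at least one theory) under an independence assumption."""
    return 1 - (1 - p) ** n_theories

# With a hypothetical per-theory belief rate of 22 percent, a four-item list
# already captures about 63 percent of respondents, and longer lists capture
# far more:
for n in (4, 10, 25):
    print(n, round(share_believing_at_least_one(0.22, n), 2))
# → 4 0.63
# → 10 0.92
# → 25 1.0
```

The independence assumption is unrealistic (the literature discussed below finds that beliefs in different theories are correlated), but the direction of the bias holds either way: asking about only four theories can only understate how many people believe "at least one."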
These aren’t the only times researchers or the reporters who cover them have made this sort of mistake. For decades, psychologists and social scientists have been studying conspiracy theories and the people who believe them. They have unearthed a lot of interesting data, and they have sometimes theorized thoughtfully about the results. But they have repeatedly run into a problem: The world they’re studying is not the same size and shape as the world of conspiracy belief.
Conspiracy theories feature a wide range of masterminds. In The United States of Paranoia, my history of paranoid American folklore, I divided those conspirators into five categories. There is the Enemy Outside, an alien force based outside the community’s borders; the Enemy Within, fellow citizens who cannot be easily distinguished from friends; the Enemy Above, plotting at the top of the power structure; the Enemy Below, conspiring in the underclass; and the Benevolent Conspiracy, which isn’t an enemy at all.
Needless to say, this is hardly the only way conspiracy stories can be sorted. And in practice, those five types frequently overlap with one another: The Enemy Outside, for example, might be accused of pulling the Enemy Below’s strings, as when various prominent Americans blamed the Communist bloc for the urban riots of the ’60s. But it’s a useful typology, with plenty of historical examples of each kind.
In these studies, though, Enemy Above stories tend to be overrepresented. And that in turn can skew the results. When researchers draw conclusions about people who are especially prone to seeing conspiracies, they might actually be telling us about people prone to seeing a particular kind of conspiracy.
Sometimes this bias is stated baldly. In 2010, for example, the Rutgers sociologist Ted Goertzel wrote an article for EMBO Reports, a journal of molecular biology, that said conspiracy logic tends to “question everything the ‘establishment’—be it government or scientists—says or does.” He backed this up on the rather thin grounds that a recent pop text, The Rough Guide to Conspiracy Theories, mostly discusses theories about “political, religious, military, diplomatic or economic elites.”
But that “establishment” has conspiracy theories of its own, even if the Rough Guide overlooked them. At moments of moral panic, it is common for the government and the mainstream media to blame a folk devil—frequently cast in conspiratorial terms—for a real or alleged crisis. Examples range from the white slavery panic of a century ago, when a vast international syndicate was believed to be conscripting thousands of girls into sexual service, to the Satanism scare of the 1980s and early ’90s, when politicians, prosecutors, juries, and the press were persuaded that devil-worshipping cabals were molesting and killing children. Often the conspiracy stories believed by relatively powerless people are mirrored by conspiracy stories believed by elites. At the same time that American slaves were afraid that white doctors were plotting to kidnap and dissect them, the planter class was periodically seized by fears of slaves secretly plotting revolution. While the Populist Party was denouncing East Coast banking cabals, many wealthy Easterners were wondering whether a conspiracy was behind Populism.
With that in mind, consider the academic literature on conspiracy believers. In 1992 Goertzel surveyed 348 residents of New Jersey about 10 conspiracy theories that were circulating at the time. Seven of the 10 were Enemy Above theories, in which the government was guilty of murdering Martin Luther King, deliberately spreading AIDS, covering up UFO activity, or otherwise injuring the public interest. Two more—one where a conspiracy killed John F. Kennedy, one where Anita Hill was part of a plot against Clarence Thomas—could take either an Enemy Above form or another shape, depending on the version of the story the person surveyed believed. Only one of the 10 was definitely not an Enemy Above theory: “The Japanese are deliberately conspiring to destroy the American economy.” (That one was, interestingly, one of the most popular items in the list, with 46 percent of respondents declaring it either definitely or probably true.)
This does not mean that Goertzel’s data are useless or that he didn’t produce an interesting paper. But when he writes, say, that conspiratorial beliefs are correlated with anomie and insecurity about unemployment, has he really uncovered a couple of conspiracist traits? Or has he simply been asking about conspiracy theories that people experiencing anomie and economic insecurity are more likely to believe?
Goertzel also noted, “People who believed in one conspiracy were more likely to also believe in others.” This idea has become a staple of the literature: As Michael Wood, Karen Douglas, and Robbie Sutton put it in a 2012 paper for Social Psychological and Personality Science, “the most consistent finding in the work on the psychology of conspiracy theories is that belief in a particular theory is strongly predicted by belief in others—even ostensibly unrelated ones.” It has become a staple of pop-science coverage too, appearing in venues ranging from Bloomberg to Newsweek.
Anecdotally speaking, it’s a plausible idea: While everyone is capable of conspiracy thinking, some people do seem more prone to it than others. But are they really more likely to embrace conspiracy theories in general, or just conspiracy theories of a certain sort?
Consider a 2013 paper by the British psychologists Robert Brotherton, Christopher French, and Alan Pickering. The participants in the team’s initial investigation gave their views on 59 conspiratorial claims. The list was deliberately composed to reveal a broad, generic interest in conspiracies rather than an interest in specific events (such as Sept. 11) or specific villains (such as the CIA). It was also wide-ranging enough for the researchers to break down the theories by type: stories about government malfeasance, about extraterrestrial cover-ups, about malevolent global forces, about threats to personal health and liberty, and about efforts to control the flow of information. It is, in short, one of the most thorough efforts around. Even so, the vast majority of the items are clear-cut Enemy Above theories, and the remainder are, with one exception, phrased in such a way that the respondent can insert either an Enemy Above or a different sort of conspiracy into the villain role—for example, “Some of the people thought to be responsible for acts of terrorism were actually set up by those responsible.”
Or consider the study that another two British psychologists, Patrick Leman and Marco Cinnirella, published in Frontiers in Psychology last year. In that one, the respondents’ conspiratorial attitudes were determined by their responses to a Belief in Conspiracy Theories scale. Of the six items on the list that affirmed rather than denied the existence of a conspiracy, five were Enemy Above stories. The other—“The European Union is trying to take control of the United Kingdom”—is an Enemy Outside claim, but its adherents typically believe that British elites are complicit in the conspiracy.
The contents of such lists may explain why these studies sometimes come to drastically different conclusions about conspiracy believers. A 1999 paper, for example, included a wider range of theories in its questionnaire, asking its subjects not just about government plots but about Jewish cabals, terrorist infiltrators, and the Mafia. It found an association between conspiracy theories and authoritarian attitudes. Other researchers, using a different list of theories, found that conspiracy theorists tended toward defiance of authority and strong support for democratic values. Apparently it isn’t easy to generalize about a group as large as “people who believe in conspiracies.”
By now some readers are ready to shout, “BUT WHAT ABOUT CONSPIRACIES THAT ARE REAL?” Some of those readers may have abandoned this article already and gone to write something to that effect in the comment thread, capital letters and all. And it’s a fair point. Some conspiracies are real. The word conspire is in the language for a reason. And that adds further complications to the question of just whom we mean when we talk about conspiracy believers.
Many of these papers, to their credit, do raise this issue, noting that real conspiracies exist and that it is not innately irrational to believe in them. Goertzel’s EMBO article discusses the subject in detail, offering some sensible thoughts on how to distinguish a plausible conspiracy claim from an implausible one. Last year, in a special issue of the PSYPAG Quarterly devoted to the psychology of conspiracy believers, Brotherton wrote an entire article on the question of how to define “conspiracy theory,” noting that we do not typically apply the phrase to, say, the idea that a conspiracy of terrorists led by Osama bin Laden plotted the 9/11 attacks. A conspiracy theory, Brotherton suggests, is not merely a theory that invokes a conspiracy; it is “an unverified claim of conspiracy which is not the most plausible account of an event or situation, and with sensationalistic subject matter or implications. In addition, the claim will typically postulate unusually sinister and competent conspirators. Finally, the claim is based on weak kinds of evidence, and is epistemically self-insulating against disconfirmation.” This is a much more limited definition than I would offer—and it opens a whole new can of worms about which theories should or shouldn’t be included in a study—but it does have the advantage of establishing what exactly the researchers are investigating.
Still, there are drawbacks to excluding conspiracies that are widely acknowledged to exist. Earlier this year, JAMA Internal Medicine published a paper that surveyed Americans about several medically themed conspiracy theories, from “The CIA deliberately infected large numbers of African Americans with HIV under the guise of a hepatitis inoculation program” to “Health officials know that cell phones cause cancer but are doing nothing to stop it because large corporations won’t let them.” The researchers concluded that “conspiracism correlates with greater use of alternative medicine and the avoidance of traditional medicine.”
It’s a straightforward, respectable piece of research. Yet I can’t help wondering what would have happened if that list of medical plots had also included these items:
- As part of a series of mind control experiments, the CIA administered LSD to unwitting subjects, a program it continued even after it led to illness and death.
- In a 40-year ruse, the Public Health Service told hundreds of black sharecroppers that it would give them free health care. Rather than inform the patients that they had syphilis, the doctors deliberately left the disease untreated in order to study whether the illness affects blacks and whites in different ways.
- For a decade and a half, scientists used students at a New York school for the developmentally disabled as guinea pigs, deliberately infecting them with hepatitis in hopes of finding ways to combat the sickness.
All three of those tales are true. The first was one of the most explosive revelations in the Senate’s mid-1970s investigation of the CIA. The second is the infamous Tuskegee experiment of 1932–1972, which set off an uproar when it was revealed. The third, which took place from 1956 to 1971 at the Willowbrook State School, is brought up frequently in debates about informed consent: The parents agreed to the experiments, but the kids were in no position to understand what they were getting into.
If those items had been included in the JAMA study, what would the results reveal? Would people aware of real medical misbehavior be more likely to buy into the fictional stories, or would they be grounded in the evidence in a way the other believers are not? Would their beliefs also correlate with an interest in alternative medicine, or would there be a noticeable difference between their behavior and that of the original study’s conspiracy believers? How, in short, does an awareness of real conspiracies affect “conspiracist” ideas?
Just as the Facebook paper reminds us that not every false story involves a conspiracy, this alternate version of the JAMA study would remind us that not every conspiracy story is false. It could reveal a lot in the process. But to get there, you have to change your scope.
This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page.