Future Tense

The Long History of Computer Science and Psychology Comes Into View

Understanding that legacy can help us stop the next Cambridge Analytica.

For decades, computer scientists interested in human-computer interaction have viewed the brain as a machine.

The truth is finally out about Cambridge Analytica. In a series of eye-popping articles for the Guardian, Carole Cadwalladr and her colleagues have detailed the full story of how research linking Facebook demographic data to personality traits apparently ended up in the hands of Steve Bannon, Robert Mercer, and the Donald Trump campaign.

I’ve been staggered by the sheer detail of these articles, but totally unsurprised by their content: I’ve been arguing for years that the integration of digital media devices and psychological techniques is one of the most underappreciated developments in the history of computing. For more than 50 years, this has been the domain of computer scientists who have approached the brain as a “human processor,” just another machine to be tinkered with. The work has taken place almost entirely within computer science, with little input from clinical psychologists, ethicists, or other academic fields interested in the messy details of human social life. Understanding that shortsighted perspective, and how it gave rise to companies like Cambridge Analytica, can help us curtail the weaponization of social media today.

Psychological models shaped the development of computers from the very beginning. Kurt Lewin, one of the founders of social psychology, was a participant in the 1946 Macy Conference, a now-legendary gathering of computer scientists and scholars interested in human behavior that helped birth both cybernetics and systems theory. This combination of psychology, systems analysis, and computer science became a hallmark of other Cold War-era research institutes like the RAND Corp. and the Stanford Research Institute. Much of this research was tied to the defense establishment and its large institutional mainframes. The idea of an individual user interacting at their own discretion with newly personal computers attracted only a few visionary designers like Douglas Engelbart, the inventor of the computer mouse.

Psychology’s insights into the complexities of the human mind both troubled and fascinated computer scientists in the 1950s and 1960s. In 1966, AI pioneer Joseph Weizenbaum modeled ELIZA, one of the first chatbots, on the famed talk therapist Carl Rogers. Weizenbaum set out to demonstrate how superficial communication between humans and machines was at the time, and was surprised when scores of people ascribed intelligence to the program. Also in the 1960s, philosopher Hilary Putnam developed the “computational theory of mind,” which understood the brain as a computing machine and helped shape the field of cognitive psychology around thinking of brains as “information processors.”

It was this development—metaphorically understanding brains as computers—that really began to knit psychology and computer science together in the field of human-computer interaction. A critical moment came with the 1983 publication of The Psychology of Human Computer Interaction by three scientists working for the Xerox Corp.’s Palo Alto Research Center—Stuart K. Card, Thomas P. Moran, and Allen Newell. Together, they made up the Applied Information-Processing Psychology Project at PARC, which had an outsized impact on a wide range of developments in personal computing between the 1970s and 1990s. The three brought a wealth of experience: Card was a psychologist by training, Moran a human factors engineer, and Newell a mathematician, game theorist, and artificial intelligence researcher. Card had been developing ways to apply cognitive psychology to human-computer interaction since his arrival at PARC; in the spring of 1980, he and Moran ran a graduate seminar at Stanford’s Computer Science department on the subject, open to both engineers and psychologists.

The kind of communication with machines envisioned by the PARC authors was based on understanding the human being as a functional analogue to the computer. The goal of the authors was to “integrate all the units of the human processor to do useful tasks.” These tasks could be processed through the collection of human data: about physiological response rates, movement dynamics, and other processes amenable to the digital languages of computing. The authors illustrated their idea of the “model human processor” with a bald, smiling homunculus staring happily at a computer screen.

Card and his co-authors had great ambitions for human-computer interaction as a new way to shape our behavior. They called it “an applied psychology” grounded in understanding a human and computer as one single unit through numerical tracking, task analysis, and calculability. In traditional experimental psychology, the authors complained, “measurements come to have little value in themselves as a continually growing body of useful quantitative knowledge.” Human-computer interaction, in contrast, would collect data about the human body indiscriminately, and put all sorts of measurement about the human information processor to use.

Card, Moran, and Newell were interested in collecting whatever data they could about human computer users—and they thought that computer scientists, not psychologists, should be the ones applying psychological techniques with digital systems. Systems designers were “engaged in a sort of psychological civil engineering,” they wrote, using human abilities as one factor among many to create an efficiently operating system.

The Psychology of Human Computer Interaction was highly influential. Crucially, it pushed studies of how humans interact with computers firmly into computer science for almost 20 years, detaching psychological techniques from their human context and prompting psychologists themselves to play catch-up. Branches of psychology already dealing in numerical evaluation, like psychometrics, found human-computer interaction research especially amenable to their experiments. For instance, British data scientist and psychologist Michal Kosinski claimed in a 2013 paper that publicly available Facebook likes could be used to determine a user’s race, sex, sexuality, and even personality traits. Last year, Kosinski published a paper suggesting that his AI system could infer sexual orientation from facial images more accurately than a human being can.

Happily, human-computer interaction has changed over the past decade to include a more diverse set of methods and disciplines, including insights from designers, historians, anthropologists, and sociologists. Unfortunately, social media platforms were already up and running under the auspices of computer science as “psychological civil engineering,” conducted via digital means without much input from the social sciences. The humble online quiz, a way for individuals to while away time ever since Spark.com’s famous Purity Test from the 1990s, became a vector to collect personality data. Kosinski’s 2013 research relied on a personality test circulated via Facebook—a copy of which was used by Cambridge Analytica. And with Facebook and Twitter performing nearly constant behavioral experiments to test ways their users could be nudged into spending more time on their sites, the amount of behavioral and psychological data collected by our digital devices is only growing. “A smart phone,” Kosinski told the media after the 2016 election, “is a vast psychological questionnaire that we are constantly filling out, both consciously and unconsciously.”

As the Cambridge Analytica story shows, there’s a fine line between psychological civil engineering and psychological civil war. The behavioral, demographic, and personal information Facebook and other social media platforms now collect through what I call algorithmic psychometrics has the sensitivity of medical data, and should be treated as such by regulators around the world. In the United States, the Health Insurance Portability and Accountability Act protects medical and other health information—but only if that information is collected and used by organizations like health care providers and insurance plans. Regulators instead need to rethink health privacy laws by focusing on how demographic, behavioral, and psychological data is both collected and used across all sectors of the digital economy. The long history of psychology’s role in computing means the Cambridge Analytica bombshell makes unfortunate sense—and makes immediate regulation of these forms of data an urgent necessity.

Read more from Slate on Cambridge Analytica.

Luke Stark studies the history of computing, psychology, and artificial intelligence at Dartmouth College and Harvard University.