I won’t watch House of the Dragon, despite its high ratings. The HBO show is a prequel to Game of Thrones, and that series ended so badly I don’t want anything more to do with that fictional world. But maybe A.I. could change my mind?
At South by Southwest earlier this month, Greg Brockman, president of OpenAI, the company that created ChatGPT, said, “Imagine if you could ask your A.I. to make a new ending that goes in a different way.” Why stop there? Could A.I. be the solution to fixing every novel and script someone has a problem with—customizing revisions to make them shorter or longer, less or more violent, more or less “woke”?
The answer is no. Even if A.I. could make changes to movies and books that you personally find dissatisfying, part of those works’ value lies in the shared conversations they inspire—conversations that require opinions about common, historically situated texts.
The newest generation of artificial intelligence products has inspired waves of excitement and funding since this past fall, when generative-A.I. apps like ChatGPT and DALL-E 2 debuted. But there’s a reason that many of the use cases the technology’s boosters have suggested feel like fixes in search of problems. This is solutionism.
A term coined by the technology critic Evgeny Morozov, technological solutionism is the mistaken belief that we can make great progress on alleviating complex dilemmas, if not remedy them entirely, by reducing their core issues to simpler engineering problems. It is seductive for three reasons. First, it’s psychologically reassuring. It feels good to believe that in a complicated world, tough challenges can be met easily and straightforwardly. Second, technological solutionism is financially enticing. It promises an affordable, if not cheap, silver bullet in a world with limited resources for tackling many pressing problems. Third, technological solutionism reinforces optimism about innovation—particularly the technocratic idea that engineering approaches to problem-solving are more effective than alternatives that have social and political dimensions.
But if it sounds too good to be true—a new ending to a bad show!—we know that it probably is. Solutionism doesn’t work because it misrepresents problems and misunderstands why they arise. Solutionists make these mistakes because they disregard or downplay critical information, which often is about context. To get that information, you need to hear from people who have the relevant knowledge and experience.
Solutionism is a crucial component of how Big Tech sells its visions of innovation to the public and investors. When Facebook became Meta and started advertising its approach to virtual reality, it aired an expensive, inadvertently depressing Super Bowl commercial that conveyed the message that physical reality is broken and the solution to everything that ails us can be found in virtual alternatives. The message sank in, and today we have law enforcement suggesting that the metaverse is “an online solution to the law enforcement recruitment problem” because it will allow potential recruits to have immersive experiences like driving police vehicles and solving cases. At the same time, Meta is making moves that reveal its solutionism as a PR strategy. Although the company was bullish on Horizon Workrooms, a metaverse product that allows teams to collaborate in VR, Mark Zuckerberg has done an about-face on allowing Meta employees to continue to do their jobs remotely. He’s also signaling a shift in company priorities from the metaverse to A.I.
Now solutionism is part of the current artificial intelligence hype cycle. While there's no doubt that new A.I. products will significantly affect how we work, socialize, and play, we're also drowning in a sea of hyperbole. So much so that the Federal Trade Commission has stepped in, warning companies against exaggerating what their A.I. products can do. "AI hype is playing out today across many products, from toys to cars to chatbots and a lot of things in between," the agency writes.
Consider a tweet from Sam Altman, the CEO of OpenAI, predicting that A.I. will offer medical advice to people who can't afford care, make students smarter, and make all of us more productive.
Thankfully, there are many promising developments in A.I. for medical research and diagnostics. But the idea that A.I. medical advisers will be a boon to people who “can’t afford care” is unrealistic at worst and overstated at best. To consider the best version of Altman’s statement, let’s say that at some point A.I. medical advisers will consistently offer high-quality advice in some health-care domains. Even then, there are good reasons to be skeptical.
First, according to Benjamin Mazer, an assistant professor of pathology at Johns Hopkins University, advances in A.I. are poised to increase health-care costs. As A.I. becomes capable of conducting aspects of physical exams and rigorously analyzing medical histories and symptoms, he expects the U.S. medical industry to itemize each inquiry and analysis as a billable "fee for service." Beyond turning A.I. into a diagnostic moneymaker, Mazer anticipates "cascades of care" emerging as A.I. systems recommend an ever-expanding array of expensive tests and procedures, including "unnecessary, alarming, and sometimes harmful ones." In short, advances in medical automation could very well drive medical costs up, not down.
Second, what good is medical advice if you can't act on it? Beyond the expense of testing and procedures, medicine, many therapies (like physical therapy), healthy eating, and other typical responses to medical issues are all costly—the very thing that's a problem for people who find medical care unaffordable. It's unclear how a chatbot makes any of these things cheaper. Third, even in cases where A.I. advice doesn't cost much to implement, we should expect results to vary. For example, though some people find A.I.-based cognitive behavioral therapy helpful, others prefer working with an empathetic human therapist who genuinely cares (as opposed to an A.I. that can only simulate caring) and who keeps them accountable for meeting their goals. In this therapeutic context, some people with lesser means could be relegated to undesirable bot treatment, while those who are better off financially get their preferred human consultations. Those stuck with the bots will see A.I. as exacerbating inequality, not solving anything.
To see if Altman is fundamentally thinking about A.I. in solutionist terms or just choosing a bad example, let’s consider another situation that captures the spirit of his proposal. What about low-income people using free or cheaply available A.I. legal advisers instead of paying for human lawyers, who charge a premium for their services? Imagine someone thinking of renting an apartment, taking a job, or hiring someone to perform home repair services and using an A.I. to review the contract—to clarify, in easy-to-understand language, the pros and cons of terms expressed in hard-to-parse legalese. This isn’t an unrealistic possibility. MSNBC just presented a gushing segment on the premiere of the “first-ever A.I. legal assistant.”
I asked Tess Wilkinson-Ryan, a contracts scholar and professor at the University of Pennsylvania Carey Law School, what she thinks. She said, “I am pretty skeptical about the utility of cheap A.I. for the kinds of contracting problems that pose the most serious threats to low-income parties. My general view is that the biggest problems do not have to do with people failing to understand their contracts; they have to do with people lacking the financial resources (access to credit, or cash reserves) to afford the transactions that come with good terms.” The exception, Wilkinson-Ryan notes, is if an A.I. notices that a contract includes unenforceable terms. But even here, she emphasizes, the technology has limited utility. “It might help tenants who are negotiating disputes with their landlords,” she told me, but “it might not be useful for choosing whether or not to rent an apartment.”
But what about Altman’s claim that A.I. will make students “smarter” because they can learn with ChatGPT? While some students are using the technology to make studying easier by asking it to explain material they didn’t grasp in class, the technology is also causing a wave of panic across schools. Teachers are struggling to figure out how to teach and grade now that plagiarism is effortless and hard to detect. Furthermore, it’s challenging to create effective educational policies, not least because chatbots are rapidly evolving, giving protocols and procedures short shelf lives. In sum, it’s far too early to know whether technologies like ChatGPT will make students smarter, because we don’t know what technology-driven methods of instruction and teaching will emerge as adaptive measures.
But at least Altman is correct that we’ll be more productive, right? Not so fast. The idea of enhancing productivity can be misleading because many people assume that gains in efficiency translate into less and better work. Social media is filled with people fawning over how ChatGPT’s latest update, using OpenAI’s new GPT-4 large multimodal model, is a fantastic productivity hack for quickly automating tasks that used to be time-consuming. What they’re missing is that their gains will be short-lived. Compared with previous workflows, their early-adopter prompt engineering is saving time. But once everyone catches up and experiences the same benefits from automation, the standards for doing good work in competitive environments will rise. Washington University professor Ian Bogost thus characterizes OpenAI as “adopting a classic model of solutionism” and notes that tools like ChatGPT “will impose new regimes of labor and management atop the labor required to carry out the supposedly labor-saving effort.”
To get the most out of A.I., we need to be clear-eyed about how its use will impact society. Exaggerating its upsides is detrimental to this goal. And solutionism is one of the worst forms of overstatement.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.