Throughout April, as part of our fourth Futurography course, Future Tense focused on debates about the supposed dangers of artificial intelligence. We published essays and articles by experts and academics on a wide range of topics, but we’re also interested in what you have to say. To that end, we’ve written up some of your answers to our survey on the topic. We hope you’ll follow along as we explore the cultural conversations about drones this month.
Most of the Slate readers who wrote in were unconvinced that A.I. itself presents a direct threat, though many still felt that it has dangerous qualities. Arguing that it’s humans who are the greater risk, one wrote, “I’m worried about people deliberately exploiting A.I. in ways that are detrimental to human prosperity.” Others argued that we should be worrying more about the industries and governments developing the machines than about the machines themselves.
Those who did maintain that A.I. might endanger humans still tended to shy away from the premise that it presents a truly existential threat. In accordance with Slate contributors such as Cecilia Tilli, some readers felt that the real problem might be that A.I. will “make many more jobs obsolete, which will severely strain the social fabric.”
To the question of which concerns about A.I. were overblown, an overwhelming number of respondents agreed that fears about actual murderous robots are largely silly. “I’m not sure we will ever get to a true A.I. in the Terminator sense,” one wrote, while another argued, “Evil robots probably won’t exist because there is no reason for them to be programmed that way.” Some went further still, proposing that even the possibility of humanlike intelligence seemed unlikely. “Computers just don’t think like we do from what I understand,” one wrote.
Some readers were skeptical of the idea that A.I.’s interests could ever correspond with our own, despite the work of researchers such as Stuart Russell who are trying to ensure that computers can learn to recognize what’s most important to humans. As one who took such a position put it, “A.I. will likely just be a very, very smart machine and the concept of ‘interests’ might not bear.” Instead, “We should worry more about whether the interests of private and military A.I. R&D teams align with our public interests,” a reader suggested. Another wrote that “terrorist groups and … enemy nation states” presented the greater risk. And others continued to hold that the real trouble is that there’s no such thing as human interest per se, as did one who wrote, “We humans can’t even agree on what constitutes ‘good’ and what constitutes ‘evil.’ ”
Whatever their concerns, readers had a wide range of ideas for future A.I. research priorities. Many suggested that we should look more deeply into medical applications of A.I. such as epidemiological analyses and artificial pancreas technology. Others proposed that we should instead focus on sociological concerns, working to anticipate how A.I. will change the ways we live instead of simply developing the systems that will bring about such changes. Several others echoed a point made by Carissa Véliz in Future Tense, proposing that we need to think more fully about what constitutes consciousness. “Is AI consciousness possible? And how would we know it?” one typical reader asked.
When all was said and done, a few readers had other lingering questions about future developments. “When will Siri not be abjectly terrible?” one asked, while another inquired, “How quickly will Slate employees be replaced by A.I.?” To that we have a question of our own: How do you know we haven’t been replaced already?
This article is part of the artificial intelligence installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on artificial intelligence:
- “What’s the Deal With Artificial Intelligence Killing Humans?”
- “Your Artificial Intelligence Cheat Sheet”
- “Killer Robots on the Battlefield”
- “The Wrong Cognitive Measuring Stick”
- “The Challenge of Determining Whether an A.I. Is Sentient”
- An interview with A.I. expert Stuart Russell
- “Why You Can’t Teach Human Values to Artificial Intelligence”
- “Let Artificial Intelligence Evolve”
- “Mika Model,” a brand-new short story from sci-fi great Paolo Bacigalupi
- “When a Robot Kills, Is It Murder or Product Liability?”
- “The Threats That Artificial Intelligence Researchers Actually Worry About”
- “How Much Do You Know About Killer A.I.? Take Our Quiz.”