What Slate Readers Think About the Power of Algorithms

Throughout February, Future Tense explored the social and cultural status of algorithms for our second Futurography course. As the month came to a close, we asked for your opinions, and you delivered, offering rich and thoughtful responses to the articles we published and issues we discussed. We’ve written up some of your thoughts below, and we hope that you’ll follow along with similar enthusiasm this month as we look into the debates surrounding the risk of cyberwar.

Respondents to our algorithms survey were split on the central question of the course: Have we given algorithms too much power? Some who believe that we’ve let software go too far alluded to the filter bubble problem—whereby social media presents opinions and ideas that primarily serve to affirm our existing beliefs. Many stressed that this was a question of transparency; one observed, “So much of what goes on in server rooms and data centers is so highly automated, we can barely understand it all.” Others worried that our reliance on algorithms speaks to a larger set of issues: “The underlying problem is that we have elevated quantitative analysis over qualitative analysis,” one wrote.

But some stressed the very real potential of algorithms. “[I]t’s normal and desirable for us to expand human capacity through building artifacts,” a respondent wrote. Working along similar lines, several argued that algorithms might helpfully take “human emotion and bias out of decisions,” despite the evidence that some algorithms actually internalize existing prejudices. With such dangers seemingly in mind, others pointed out that we don’t necessarily need fewer algorithms, just a more transparent view of the ones we have.

Those who endorsed the role of these programs in our lives discussed a handful of areas that could benefit from further algorithmic automation. “Data analysis of medical outcomes, likelihood of contagious diseases being spread, network packet routing, spam filtering,” a typical list read. Another suggested that algorithms might “make the roads safer” through the introduction of self-driving cars. And at least one took a more mundane tack, proposing that we could use an algorithm that shows you “how to organize your closet.”

On the other hand, skeptics identified a handful of areas that might be better off without algorithmic intervention. Here, the fear that algorithms might take our jobs stood out, but the majority of concerned responses trod more ethereal terrain: “Algorithms should be taboo in artwork, music, and other media intended to express human emotions,” a reader insisted. And several worried over the ways algorithms have intruded into our experience of love—as they do via online dating—though others noted that our brains already work along algorithmic lines when we’re falling for someone.

Almost all readers who wrote in rejected the premise that you can take an absolute position for or against algorithms. Pointing out that an algorithm is “simply a process or formula for solving a complex problem,” one observed that such models are almost as old as civilization itself—if not older. This line of thinking resonated throughout many other responses: “To be for or against algorithms generally makes no sense. They are useful when properly and carefully deployed and potentially dangerous otherwise—you know, like fire,” one concluded.

For all that, questions linger for many readers. One wanted to know more about where algorithms “have failed us in the past” as well as where they’ve been “proven most reliable.” Others suggested that they’d like to know more about deep learning, including one who asked whether it’s just “hype for … bigger and better pattern recognition engines, or … a real breakthrough.” And a few looked to the more distant future, asking how algorithms might change the nature of cognition itself. “Can an algorithm, or a series of them, ever be able to attain true sentience?” read one such response. We’re wondering, too—in fact, Futurography will address artificial intelligence sentience come April.

This article is part of the algorithm installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on algorithms:

Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.