In February, the Australian Competition and Consumer Commission announced a government probe into Facebook and Google. Among the issues to be investigated? A potential “consumer protection concern” over the role their personalization algorithms play in polarizing the news we access. After all, if the invisible hand of internet equations—customized to deliver content we “like” in our search results and newsfeeds—only feeds us news that plays to our political biases, it’s understandable to fear that online algorithms are destroying democracy.
But it’s not necessarily accurate. There’s no question that the U.S. is experiencing a rise in polarization across many measures, including the vitriol with which we view opposing parties—and that this fragmentation manifests in our partisan media diets. Journalists themselves aren’t exempt: an MIT analysis of Twitter data suggested that most media professionals were largely disconnected from Trump supporters on the platform in 2016.
That said, the actual interactions between human choices and algorithms shaping our partisan diets are less straightforward. A 2015 Science study by Facebook researchers found that our own clicks and friend choices were primarily responsible for limiting our exposure to “cross-cutting” content—that is, content with political perspectives across the aisle—with algorithms playing a minor role. Only 24 and 35 percent of content shared by liberals’ and conservatives’ friends, respectively, was cross-cutting; newsfeed algorithms trimmed that slightly, to 22 and 34 percent; and the share users actually clicked on was 21 and 30 percent. (The study methodology is a bit complicated, but it’s explained well in this Wired piece.)
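To see why the researchers credited the algorithm with only a minor role, it helps to line up how much cross-cutting exposure each stage of that funnel strips away. The sketch below simply re-tabulates the percentages quoted above; it is not additional data from the study, and it leaves out the largest filter of all, which sits upstream of these numbers: whom users befriend in the first place.

```python
# Re-tabulating the exposure funnel quoted above (the article's figures, not new data).
shared_by_friends = {"liberals": 24, "conservatives": 35}   # % cross-cutting content
shown_by_newsfeed = {"liberals": 22, "conservatives": 34}   # after algorithmic ranking
clicked_by_users  = {"liberals": 21, "conservatives": 30}   # what people chose to read

for group in ("liberals", "conservatives"):
    algo_drop = shared_by_friends[group] - shown_by_newsfeed[group]
    click_drop = shown_by_newsfeed[group] - clicked_by_users[group]
    print(f"{group}: newsfeed ranking trimmed {algo_drop} points, "
          f"users' own clicks trimmed {click_drop} points")
```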
More recently, a 2016 study found that social networks and web search engines were indeed associated with a modest increase in ideological distance between individuals—but they also exposed individuals to more cross-cutting media. And just last year, Stanford researchers found evidence that polarization had grown most among the demographic groups that spent the least time online.
What’s more, emerging evidence suggests that personalization algorithms may hold the key to creating commonality, and may even help break down filter bubbles—at least under certain circumstances.
In 2014, Kartik Hosanagar, Daniel Fleder, and Andreas Buja at the University of Pennsylvania, and Dokyun Lee at Carnegie Mellon University published the first empirical study with treatment and control groups on whether recommendation services led to expanded commonality or polarization in the music industry.* Analyzing data from more than 1,700 iTunes users from January to July 2007, they investigated the impact of an online music service that presented users with personalized music recommendations. As users listened to songs on iTunes, the service displayed recommendations that they could sample and purchase in a side window. It also showed them the play histories of other users with similar interests. The treatment group included users who had registered for this service in March, enabling a two-month period of pre- and post-data collection, while the control group consisted of users who registered after May.
Because the researchers could not randomize the groups, they used a matching technique to pair users in the treatment and control groups with similar behavior (for example, the date they installed iTunes, their library sizes, average monthly downloads, among other variables); they then ran sensitivity analyses that confirmed the treatment and control groups were comparable. Finally, they evaluated the resulting user overlap in media consumption. If personalization led to polarization, they expected to see a decline.
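The paper’s matching procedure is more elaborate than anything worth reproducing here, but the general idea is straightforward: for each treated user, find the control user who looks most similar on the observed covariates. The sketch below is a minimal illustration under assumed field names and a plain, unweighted distance; the authors’ actual technique differs, and a real analysis would standardize each covariate before measuring distance.

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: int
    install_day: float        # days since iTunes was installed (hypothetical covariate)
    library_size: float       # number of songs in the user's library
    monthly_downloads: float  # average downloads per month

def distance(a: User, b: User) -> float:
    """Euclidean distance over the matching covariates (unscaled, for simplicity)."""
    return ((a.install_day - b.install_day) ** 2
            + (a.library_size - b.library_size) ** 2
            + (a.monthly_downloads - b.monthly_downloads) ** 2) ** 0.5

def match(treatment: list[User], control: list[User]) -> dict[int, int]:
    """Greedily pair each treated user with the most similar unused control user."""
    pairs, available = {}, list(control)
    for t in treatment:
        best = min(available, key=lambda c: distance(t, c))
        pairs[t.user_id] = best.user_id
        available.remove(best)
    return pairs
```

Once the pairs are fixed, any post-registration difference in behavior between a treated user and their match can be attributed, with the usual caveats, to the recommendation service rather than to pre-existing differences.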
To their surprise, the percentage of users who consumed at least one song in common doubled, from 23 to 46, even as the number of items they shared shot up. What’s more, they found overlap increasing both within and across groups. It wasn’t just that Bach fans were burrowing into increasingly obscure harpsichord concertos—they were also consuming more unique artists across genres.
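One natural way to read that overlap measure: for every pair of users, check whether their listening histories intersect, and by how many songs. Here is a toy sketch of that computation, with made-up data rather than anything from the study:

```python
from itertools import combinations

def overlap_stats(listening: dict[str, set[str]]) -> tuple[float, float]:
    """Return the share of user pairs with at least one song in common,
    and the mean number of songs shared per pair."""
    pairs = list(combinations(listening.values(), 2))
    shared = [len(a & b) for a, b in pairs]
    return (sum(1 for n in shared if n > 0) / len(pairs),
            sum(shared) / len(pairs))

# Toy example: two of three users share "Blue Suede Shoes".
histories = {
    "u1": {"Blue Suede Shoes", "Hound Dog"},
    "u2": {"Blue Suede Shoes", "How Great Thou Art"},
    "u3": {"Goldberg Variations"},
}
print(overlap_stats(histories))  # (0.33..., 0.33...): one of three pairs overlaps
```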
When Hosanagar and I discussed what was going on, he attributed it to two mechanisms. First, users were simply consuming more content overall. Where two people might have listened to six songs on their own, they were now listening to 12 with recommendations, increasing the odds that they consumed at least one song in common. But independent of the increase in volume, there was also a jump in exploration: The algorithm encouraged users to discover new songs, even nudging them across genres—with the result that personalized users had both more commonality and more diversity in their consumption.
It was the subset of users who jumped genres that revealed one of the most intriguing insights about how personalization might help us break out of our filter bubbles.
“In almost all instances in which we observe that there were people who were going into new genres, it almost always happened through these ‘pathway’ or ‘bridge’ artists,” Hosanagar observed. “There has to be something in common.”
What these users had in common was Elvis. Or if not Elvis, someone like him. Boundary-spanning artists like the King of Rock ’n’ Roll triggered recommendations for gospel, rock ’n’ roll, and country. You might be rocking to “Blue Suede Shoes,” and a recommendation could pop up for his gospel album How Great Thou Art. “If it’s too far from my viewpoint, I might not respond to it,” Hosanagar pointed out. “It’s not the big changes—it’s all these incremental nudges that can get us to a new place.”
That idea is exactly what Kiran Garimella and his colleagues at Aalto University and the Qatar Computing Research Institute have been experimenting with in the political domain. They’ve developed an algorithm for partisan Twitter users that identifies news articles across the aisle that they might be most receptive to.
“It’s more complex than just recommending something from the other side,” Garimella explained. “You also take into account what the person is interested in, and if they’re extremely biased on the very end of the political spectrum.”
Simply exposing people to information they disagree with is often ineffective for meaningful engagement. We generally dislike information that contradicts our worldviews, and we’re remarkably good at interpreting it through self-serving biases. But if our distaste for difference leads us to insulate within tribes, then perhaps one way forward is to lead with similarity—or, at least, more palatable combinations of difference and similarity.
Garimella and his colleagues ran a preliminary test of their algorithm on Twitter data in the wake of the 2016 U.S. election. They analyzed user interactions, connections, keywords, and hashtags to map out their filter bubbles, and then quantified polarity scores for each user on a scale of -1 to 1 (with -1 being very conservative, 1 being very liberal, and zero being neutral). The algorithm identified news articles across party lines for each of the users—but not too far from their own polarity score—that were generally popular on Twitter, and also relevant to their topic interests based on account and tweet data.
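The published method rests on graph analysis of Twitter follow and retweet networks, but the selection logic described here (cross-cutting yet not too distant, popular, and on topic) can be sketched as a simple scoring rule. The field names, weights, and cutoff below are illustrative assumptions, not Garimella’s algorithm, and the sketch uses the sharer’s polarity score as a stand-in for the article’s.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    polarity: float    # -1 very conservative ... +1 very liberal (sharer's score as proxy)
    popularity: float  # e.g., normalized Twitter share count, 0..1
    topics: set[str]

def recommend(user_polarity: float, user_topics: set[str],
              candidates: list[Article], max_gap: float = 1.2) -> list[Article]:
    """Rank cross-cutting articles a partisan user might actually accept."""
    def score(a: Article) -> float:
        gap = abs(a.polarity - user_polarity)
        if a.polarity * user_polarity >= 0 or gap > max_gap:
            return float("-inf")                   # same side, or too big a leap
        topical = len(a.topics & user_topics) / max(len(user_topics), 1)
        return a.popularity + topical - 0.5 * gap  # illustrative weights
    return [a for a in sorted(candidates, key=score, reverse=True)
            if score(a) > float("-inf")]

# A toy run matching the example described in the next paragraph:
user_polarity, user_topics = -0.99, {"immigration"}
articles = [
    Article("What Conservatives Get Wrong About Trump's Immigration Order",
            polarity=0.15, popularity=0.8, topics={"immigration"}),
    Article("A far-left op-ed on an unrelated topic",
            polarity=0.95, popularity=0.6, topics={"healthcare"}),
]
print([a.title for a in recommend(user_polarity, user_topics, articles)])
```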
For example, one user identified as “Christian,” “Unapologetic @POTUS Trump Supporter,” and “Snowflake Hater” on his profile and tweeted about immigration (polarity score: -0.99). The algorithm floated forth an Atlantic article tweeted by a left-of-center Episcopalian (polarity score: +0.15): “What Conservatives Get Wrong About Trump’s Immigration Order.”
“One thing we realized, especially after doing these experiments on Twitter, is that this is definitely not a computer science problem in itself,” Garimella confessed. “This is really an interdisciplinary problem. You need a psychologist, you need a social scientist to understand how people behave. … In my next step, we are trying to collaborate with some psychologists at the university to do lab experiments before we get started on real world large-scale experiments.”
It’s still too early to tell whether these kinds of algorithms will work within our polarized climate. While the 2014 study demonstrated their potential to bridge diverse users, the divide between rock-and-roll and gospel groupies is almost definitely less antagonistic than our political tribes. When ideological threat is high and common ground low, tailoring to individual tastes may be less likely to cultivate a global village that has us all singing “Kumbaya” (or “Blue Suede Shoes”).
But a number of researchers are increasingly incorporating social science insights into tech solutions. Eduardo Graells-Garrido and Yahoo Labs researchers have explored the potential to connect Chilean Twitter users divided on politics through common interests like sports and movies. Just this March, MIT researchers released a paper on a social mirror feature that shows Twitter users the political makeup of their networks and their own position within it. All of these capitalize on the social dynamics already embedded within our online networks and personalization algorithms—a reminder that the very same social components underlying our filter bubbles may also be their solution.
After all, personalization was never intended to create the kinds of foxholes that we now fear. The first major uses of recommendations were in e-commerce platforms like Amazon and Netflix, where they were developed as a discovery tool, enabling consumers to explore—and expand—their consumption. In fact, a 2003 Amazon industry report dismissed recommendations that were “too narrow,” because the goal of personalization was to “help a customer find and discover new, relevant, and interesting items.” Algorithms acted like the best of gift-givers: friends who are familiar enough with your tastes to know what you’ll like, but also creative and different enough to help you discover what you never knew you liked.
The past several years have shown us that personalization—like the internet itself—hasn’t always lived up to its potential for greater discovery and connectivity. Our own behavior and biases are partly to blame, as are the internet companies that simultaneously reward viral and insular media content. But with more critical attention nudging us toward an update, perhaps algorithms and social media can again be our friends—or at least, recommend those across the aisle with just the right dose in common to be our “bridge” friends.
*Correction, April 5, 2018: This article originally misspelled Kartik Hosanagar’s last name.