On Tuesday, the Joe Biden–Kamala Harris campaign debuted a new series of lawn signs, variously emblazoned with the official Biden-Harris logo, the word JOE with a rainbow E at the end, and three pixelated sets of aviator sunglasses, Biden’s signature prop. The release of lawn signs for a presidential campaign would not ordinarily be notable—aside, perhaps, from the pixelation of the images—but the location of these signs is: They exist inside a video game. The Biden-Harris signs are digital assets that players can proudly display in the online multiplayer video game Animal Crossing: New Horizons.
This is not the first time Animal Crossing has crossed paths with American politics: In May, Rep. Alexandria Ocasio-Cortez visited the Animal Crossing islands of some of her supporters. But the Biden-Harris sign rollout raises important questions around how companies that create and operate online multiplayer games will wrestle with the abuse of these digital social spaces for less wholesome political ends.
Most people think of social media as the only kind of digital social platform, but countless online multiplayer games fit the bill as well, including Animal Crossing, Fortnite, Minecraft, and League of Legends. These games are forums for communication, expression, and engagement among millions of users every day.
Much has been made of attempts by Facebook, YouTube, Twitter, and other popular social media platforms to grapple with various forms of abuse, such as disinformation, on their platforms. According to Pew Research, nearly three-quarters of Americans have little to no confidence in tech companies’ ability to prevent misuse of their platforms during the 2020 election, despite significant investments by these companies to do exactly that. Game companies just as urgently need to think about the thorny problems of potential misuses of their platforms.
Positive political engagement has been fairly visible in these spaces recently—this year alone, Animal Crossing has seen users participating in in-game Black Lives Matter vigils, anti–animal abuse raids, and rallies in support of protesters in Hong Kong (leading to a ban of the game by the Chinese government). But heinous and potentially political abuses of these digital social spaces could just as easily take hold, and it’s unclear whether the game industry is ready to grapple with these complex content moderation problems.
In July, following the murder of George Floyd by law enforcement, Fortnite convened an in-game panel about racism in America. During the panel, audience members used an in-game mechanic to throw tomatoes at CNN commentator Van Jones as he spoke. Epic Games, Fortnite’s publisher, made no public comment on the incident, and no policy change followed. (It’s unclear whether any players were reported or whether private action was taken.) Epic does have a code of conduct and a moderation team that responds to reports of abuse from players based on that code. But how is a moderation team in an online multiplayer game equipped to interpret the intent behind tomato throwing, or to differentiate hateful harassment from legitimate protest on its platform? Was it racism? Was it a reaction to Jones’ praise for conservatives’ approach to criminal justice reform? It could have been something else altogether. It is precisely these kinds of thorny, context-dependent content moderation decisions that companies operating online multiplayer games must begin to grapple with.
Another complicated area for digital platform governance, especially in 2020, is the problem of false and misleading medical information. In April, the World Health Organization announced the #PlayApartTogether campaign in collaboration with many major companies across the game industry to, in part, help communicate important COVID-19 medical guidelines to the massive audience of online gamers.
But just as WHO or the Biden-Harris campaign can use these virtual spaces to spread their messages, players or other entities could spread harmful disinformation too. This has been an area of concern for social media platforms, which have expanded the circle of advisers informing their policies and enforcement on this topic to include credible sources of medical information. Game companies need to consider how the same dynamics could play out in their virtual worlds and take similar precautions.
Nintendo’s code of conduct, which governs online play in Animal Crossing: New Horizons, prohibits, among other things:

Misrepresenting, impersonating, abusing, stalking, threatening, or harassing any person or company, such as other users, Nintendo, or Nintendo employees, representatives, moderators, or contractors; engaging in or promoting any discriminatory, defamatory, hateful, obscene, physically dangerous, or otherwise illegal, fraudulent or objectionable conduct in connection with the Service.
Compare that with Facebook’s terms of service from 2006, which told users they could not:

• upload, post, email, transmit or otherwise make available any content that we deem to be harmful, threatening, abusive, harassing, vulgar, obscene, hateful, or racially, ethnically or otherwise objectionable;
• impersonate any person or entity, or falsely state or otherwise misrepresent yourself or your affiliation with any person or entity.
In the 14 years since then, Facebook has developed separate and specific rules around misrepresentation, hate speech, false news, stalking, harassment, and physical threats, with rules sometimes broken out further into different tiers of escalation. While Facebook is hardly the paragon of good platform governance, the growth of its policies speaks to its experience of seeing behaviors manifest on its platform and finding ways to address them. Over the years, Facebook has crafted new and ever more expansive rules around specific behaviors to mitigate harms to its users and to society—and to Facebook’s once-sterling reputation.
The same problems that have historically plagued social media are also very much present in online multiplayer games. For example, the Anti-Defamation League’s 2019 nationally representative survey of hate, harassment, and positive social experiences in online multiplayer games found that 13 percent of American adults who play online multiplayer games were exposed to in-game conversations around conspiracy theories about 9/11, 9 percent were exposed to discussions about Holocaust denial, and 8 percent were exposed to disinformation about vaccines. (I am associate director of ADL’s Center for Technology and Society.) If the game industry wants to avoid becoming the hotbed of disinformation and conspiracy theories that social media has become, it needs to act now. It must use the benefit of hindsight and apply the lessons learned from the failures of Big Tech to the rapidly growing and increasingly important digital social spaces of online multiplayer games.
Game companies should be asking their employees: What forms can political disinformation take in our game’s specific online environment? What features can we introduce to discourage it? What measures will we take if it occurs? A good resource for the game industry to start this conversation would be the Brennan Center’s recently released report on digital disinformation and voter suppression, which includes recommendations for internet companies. (Publishers of online multiplayer games, we’re looking at you.)
In 2006, when Facebook first opened its doors to the general public beyond colleges, few could have predicted the pivotal role the platform would play in society, furthering disinformation-fueled genocide and influencing the outcome of a presidential election. By learning from other platforms’ mistakes, the game industry can protect the unique, important, and powerful communities in Animal Crossing—and across all online multiplayer games—along with the democratizing promise of these spaces. The harms of digital social spaces that are so apparent in 2020 must not be the ones the game industry looks back on mournfully in 2030.
Update, Sept. 7, 2020: This article was updated to include information about a Japanese politician canceling plans to campaign in Animal Crossing: New Horizons.