In the immediate aftermath of the U.S. presidential election, Mark Zuckerberg insisted that fake news on Facebook played no role in electing Trump. But Facebook seems to have had a bit of a change of heart. For example, it recently announced the hiring of former TV news journalist Campbell Brown and promised to develop a different approach to editing, curating, and filtering news.
Despite these efforts, some commentators think Facebook needs to do more. One idea that has gained some traction is the suggestion that Facebook and others hire “chief ethics officers”—akin to a chief privacy officer—to inform internal conversations around pressing social and political issues. In USA Today, Don Heider, dean of the School of Communication at Loyola University Chicago, argued that ethics officers could help guide companies through “this complex world where technology and humans collide, [in which] there often are not clear rights and wrongs.”
On the surface, the idea is appealing. Investing resources in high-level ethics positions would signal to users and policymakers that tech companies are taking seriously their role as epicenters of social, political, and economic activity.
But for all its attractiveness, the idea has one major flaw: It won’t work.
This isn’t to downplay the importance of ethics. I spend my waking hours researching and teaching on moral issues in data, tech, and culture, so I can attest to the (sometimes life-or-death) consequences of failing to consider the social and ethical dimensions of technology. On the contrary, it’s because of the importance of ethics that I need to point out two key flaws with this approach.
The first flaw is the implicit assumption that tech companies are currently devoid of people capable of careful and nuanced ethical thinking. Collaborations between technology companies and philosophers, humanists, and ethically minded social scientists are common. For example, Google consults with philosophers on everything from “right to be forgotten” debates to artificial intelligence. Apple has recruited moral philosophers and other educators to train its employees as part of its secretive Apple University. Ethically minded researchers have long had a place at Microsoft. Analytics darling Palantir Technologies specifically employs “civil liberties engineers” to address issues of privacy, transparency, and power within its systems. Twitter draws on the insight and expertise of philosophers, psychologists, activists, and lawyers as part of its Trust and Safety Council.
Facebook, for its part, responded to the 2014 public controversy surrounding the now-infamous emotional contagion experiment by recruiting experts from academia to develop an internal research ethics review process, the details of which were published last year.
Furthermore, individuals and teams of employees already working within these companies regularly challenge their employers on thorny ethical issues. At Facebook, employees have pressed Zuckerberg on issues from hate speech to board member Peter Thiel’s support for Trump (and Thiel’s disdain for, among other things, democracy and a free press). A group of renegade employees even went so far as to take on the problem of disinformation themselves—even while their CEO was still claiming “fake news” didn’t matter.
Despite these institutional and employee-led efforts, problems persist. To list only a few: In 2015, Google’s automated image-tagging software labeled black people as gorillas, and morally disturbing sentiments continue to plague the company’s search results. Despite employing people who know better, engineers at Microsoft still managed to unleash the disastrous and embarrassing “Tay” chatbot on the world. And for all its talk of civil liberties, the CIA-backed Palantir remains complicit in the unfair and pervasive surveillance of Americans by law enforcement and intelligence agencies.
And Twitter? Still the worst.
These examples point to an obvious limitation of the chief ethics officer approach: In the face of massive and sprawling technology companies, one individual (or team or council or department) is not a panacea for all possible ethical problems. Moreover, it treats Silicon Valley companies as monoliths, conveniently ignoring internal dissenters and external advocates already in the trenches, already grappling with key ethical issues.
But if there are already myriad people and initiatives pushing for more ethical systems and platforms, it’s worth asking: Why do awful things keep happening?
That brings us to the second flaw of the chief ethics officer approach: the implicit assumption that internal ethics processes are sufficient to bring about positive change in the face of powerful and countervailing commercial or political incentives.
It’s not a stretch to say that delivering the best technology and achieving commercial success are not always synonymous. Google progenitors Sergey Brin and Larry Page have admitted as much, noting more than a decade ago “that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.”
Indeed, aligning ethical values like privacy with commercial incentives is a perennial challenge. Examining incidents at both Google and Facebook, policy researchers Ira Rubinstein and Nathan Good showed that privacy violations were not always the result of a lack of attention to ethical, policy, or design considerations. Rather, privacy was often sacrificed when it ran counter to the companies’ perceived business interests.
In addition, holding companies accountable for ethics is even harder when leaders fail to see their commercial motives and the ethics of their actions as in conflict. As I’ve argued elsewhere, there is little doubt that Facebook’s Mark Zuckerberg believes that his version of a “more open and connected” world is also a better world. But his version of “openness” has its limits. After Zuckerberg decided to keep Thiel on board, he justified the move by appealing to his commitment to openness, even to views that one might not agree with—never mind that Thiel’s own views and actions are antithetical to openness in any other sense, especially if you’re a member of a free press or committed to the ideal of democracy. It’s doubtful that a chief ethics officer could have convinced Zuckerberg—a man largely unconstrained in imposing his dreams on more than 1 billion people across the globe—that his position on Thiel was morally problematic.
In light of these challenges, I think we are better served by reframing the question of ethics and tech. The solution is not to corporatize ethics internally—it’s to bring greater external pressure and accountability. Rather than position the problem as one of “bringing” ethics to companies like Facebook via a high-powered, executive hire, we should position it as challenging the structures that prevent already existing collaborations and ethically sound ideas from having a transformative effect.
Here, the lesson of chief privacy officers—a chief ethics officer’s natural analogy—is instructive. As professors Kenneth Bamberger and Deirdre Mulligan show in their book Privacy on the Ground, privacy officers have been effective in part because of the rise of the Federal Trade Commission as an active and engaged privacy regulator. In particular, FTC pressure has been integral to the development of a corporate attitude toward privacy that goes beyond mere compliance with the law and instead actively promotes and protects the interests of consumers. As Bamberger and Mulligan note, the threat of FTC oversight has helped generate “more forward-thinking and dynamic approaches to privacy policies.”
Without a major culture shift and increased external and regulatory pressure, the chances that an ethics officer could spark widespread and necessary company reform remain slim.
In other words: You can’t simply shout “more ethics!” within corporate structures that prioritize economic gains and silence ethical voices, and expect change to happen. If ethics is to stand a chance, we need clear and increasingly potent means of holding tech companies accountable for their actions.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.