Last week, the nonprofit research group OpenAI revealed that it had developed a new text-generation model that can write coherent, versatile prose when given a subject-matter prompt. However, the organization said, it would not be releasing the full algorithm due to “safety and security concerns.”
Instead, OpenAI decided to release a “much smaller” version of the model and to withhold the data sets and training code used to develop it. If your knowledge of the model, called GPT-2, came solely from the headlines of the resulting news coverage, you might think that OpenAI had built a weapons-grade chatbot. A headline from Metro U.K. read, “Elon Musk-Founded OpenAI Builds Artificial Intelligence So Powerful That It Must Be Kept Locked Up for the Good of Humanity.” Another from CNET reported, “Musk-Backed AI Group: Our Text Generator Is So Good It’s Scary.” A column from the Guardian was titled, apparently without irony, “AI Can Write Just Like Me. Brace for the Robot Apocalypse.”
That sounds alarming. Experts in the machine learning field, however, are questioning whether OpenAI’s claims are a bit exaggerated. The announcement has also sparked a debate about how to handle the proliferation of potentially dangerous A.I. algorithms.
OpenAI is a pioneer in artificial intelligence research that was initially funded by titans like SpaceX and Tesla founder Elon Musk, venture capitalist Peter Thiel, and LinkedIn co-founder Reid Hoffman. The nonprofit’s mission is to guide A.I. development responsibly, away from abusive and harmful applications. Besides text generation, OpenAI has also developed a robotic hand that can teach itself simple tasks, systems that can beat pro players of the strategy video game Dota 2, and algorithms that can incorporate human input into their learning processes.
On Feb. 14, OpenAI announced yet another feat of machine learning ingenuity in a blog post detailing how its researchers had trained a language model using text from 8 million webpages to predict the next word in a piece of writing. The resulting algorithm, according to the nonprofit, was stunning: It could “[adapt] to the style and content of the conditioning text” and allow users to “generate realistic and coherent continuations about a topic of their choosing.” To demonstrate the feat, OpenAI provided samples of text that GPT-2 had produced given a particular human-written prompt.
For example, researchers fed the generator the following scenario:
In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
The GPT-2 algorithm produced a news article in response:
The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved. Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow. Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.
Other samples exhibited GPT-2’s turns as a novelist writing another battle passage of The Lord of the Rings, a columnist railing against recycling, and a speechwriter composing John F. Kennedy’s address to the American people in the wake of his hypothetical resurrection as a cyborg.
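Reproducing that trick with the scaled-down model OpenAI did release is not especially hard. The snippet below is a minimal sketch, assuming the third-party Hugging Face transformers package and its small, publicly available “gpt2” checkpoint (not OpenAI’s withheld full model or its original training code): it feeds a prompt to the model and samples a continuation one predicted word at a time.

```python
# A minimal sketch, assuming the third-party Hugging Face "transformers" package
# and the small, publicly released "gpt2" checkpoint (not OpenAI's withheld model).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("In a shocking finding, scientist discovered a herd of unicorns "
          "living in a remote, previously unexplored valley, in the Andes Mountains.")
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generation is just repeated next-word prediction: pick a likely next token,
# append it to the text, and repeat until the desired length is reached.
output = model.generate(
    input_ids,
    max_length=120,                       # prompt plus continuation, in tokens
    do_sample=True,                       # sample rather than always taking the likeliest word
    top_k=40,                             # consider only the 40 likeliest next tokens
    temperature=0.8,                      # soften the probability distribution slightly
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token; this quiets a warning
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The sampling knobs (top_k, temperature) are standard tricks for keeping the output varied without letting it wander into gibberish; nothing here recreates the full-size model whose weights OpenAI held back.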
While researchers admit that the algorithm’s prose can be a bit sloppy (it often rambles, uses repetitive language, can’t quite nail topic transitions, and inexplicably mentions “fires happening under water”), OpenAI nevertheless contends that GPT-2 is far more sophisticated than any other text generator that it has developed. That’s a bit self-referential, but most in the A.I. field seem to agree that GPT-2 is truly at the cutting edge of what’s currently possible with text generation. Most A.I. systems are equipped to handle only specific tasks and tend to fumble anything outside that narrow range. Training the GPT-2 algorithm to adapt nimbly to various modes of writing is a significant achievement. The model also stands out from older text generators in that it can distinguish between multiple definitions of a single word based on context clues and has a deeper knowledge of more obscure usages. These enhanced capabilities allow the algorithm to compose longer and more coherent passages, which could be used to improve translation services, chatbots, and A.I. writing assistants. That doesn’t mean it will necessarily revolutionize the field.
Nevertheless, OpenAI said that it would publish only a “much smaller version” of the model due to concerns that it could be abused. The blog post fretted that the full model could be used to generate false news articles, impersonate people online, and generally flood the internet with spam and vitriol. People can, of course, create such malicious content themselves, but sophisticated A.I. text generation could vastly increase the scale at which it is produced. What GPT-2 lacks in elegant prose stylings it could more than make up for in sheer volume.
Yet the prevailing notion among A.I. experts, including those at OpenAI, is that withholding the algorithm is a stopgap measure at best. Plus, “It’s not clear that there’s any, like, stunningly new technique they [OpenAI] are using. They’re just doing a good job of taking the next step,” says Robert Frederking, the principal systems scientist at Carnegie Mellon’s Language Technologies Institute. “A lot of people are wondering if you actually achieve anything by embargoing your results when everybody else can figure out how to do it anyway.”
An entity with enough capital and a working knowledge of the A.I. research already in the public domain could build a text generator comparable to GPT-2, even by renting servers from Amazon Web Services. Had OpenAI released the full algorithm, such an entity would not have to spend as much time and computing power developing its own text generator, but the process by which OpenAI built the model isn’t exactly a mystery. (OpenAI did not respond to Slate’s requests for comment by publication.)
Some in the machine learning community have accused OpenAI of exaggerating the risks of its algorithm for media attention and of depriving academics, who may not have the resources to build such a model themselves, of the opportunity to conduct research with GPT-2. However, David Bau, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory, sees the decision more as a gesture intended to start a debate about ethics in A.I. “One organization pausing one particular project isn’t really going to change anything long term,” says Bau. “But OpenAI gets a lot of attention for anything they do … and I think they should be applauded for turning a spotlight on this issue.”
It’s worth considering, as OpenAI seems to be encouraging us to do, how researchers and society in general should approach powerful A.I. models. The dangers that come with the proliferation of A.I. won’t necessarily involve insubordinate killer robots. Let’s say, hypothetically, that OpenAI had managed to create a truly unprecedented text generator that could be easily downloaded and operated by laypeople on a mass scale. For John Bowers, a research associate at the Berkman Klein Center, what to do next may come down to a cost-benefit calculus. “The fact of the matter is that a lot of the cool stuff that we’re seeing coming out of A.I. research can be weaponized in some form,” says Bowers.
In the case of increasingly sophisticated text generators, Bowers would press for releasing the algorithms because of their contributions to natural language processing and their practical uses, though he acknowledges that other advances, such as those in A.I. image recognition, could be leveraged for invasive surveillance. By contrast, Bowers would lean away from trying to advance and proliferate an A.I. tool like the one behind deepfakes, which is often used to graft images of people’s faces onto pornography. “To me, deepfakes are a prime example of a technology that has way more downside than upside.”
Bowers stresses, however, that these are all judgment calls, which in part speaks to the current shortcomings of the machine learning field that OpenAI is trying to highlight. “A.I. is a very young field, one that in many ways hasn’t achieved maturity in terms of how we think about the products we’re building and the balance between the harm they’ll do in the world and the good,” he says. Machine learning practitioners have not yet established many widely accepted frameworks for considering the ethical implications of creating and releasing A.I.-enabled technologies.
If recent history is any indication, trying to suppress or control the proliferation of A.I. tools may also be a losing battle. Even if there is a consensus around the ethics of disseminating certain algorithms, it might not be enough to stop people who disagree.
Frederking says an analogous precedent to the current conundrum with A.I. might be the popularization of consumer-level encryption in the 1990s, when the government repeatedly tried and failed to regulate cryptography. In 1991, Joe Biden, then a senator, introduced a bill mandating that tech companies install back doors that would allow law enforcement to carry out warrants to retrieve voice, text, and other communications from customers. Programmer Phil Zimmermann soon spoiled the scheme by developing a tool called PGP, which encrypted communications so that they could be read only by the sender and the receiver. PGP soon enjoyed widespread adoption, undercutting back doors accessible to tech companies and the government. And as lawmakers were mulling further attempts to stem the adoption of strong encryption services, the National Research Council concluded in a 1996 study that users could easily and legally obtain those same services from countries like Israel and Finland.
“There’s a general philosophy that when the time has come for some scientific progress to happen, you really can’t stop it,” says Frederking. “You just need to figure out how you’re going to deal with it.”
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.