In February, BuzzFeed’s leadership announced that the company’s storied quiz operation was pivoting to A.I. OpenAI’s generative language tool ChatGPT has proven effective at regurgitating hackneyed cultural motifs back at its users, which makes it perfect for the platitudinal terrain of BuzzFeed quizzes. The company has gone all-in on the new revolution, adopting a text-synthesis program modeled on ChatGPT’s technology and tiling the website with uncanny questionnaires—all scented with the trademark unspecificity of machine learning—published under the byline “Buzzy the Robot.” Buzzy is listed on the masthead as an A.I. Creative Assistant, and I suspect that he’s not a member of the union.
“What If You Were A Disney Princess? This Quiz Will Answer That Question,” reads one of the characteristically mangled headlines written by Buzzy. (In practice, users fill out a form and reap a Mad-Libbed, Disney-themed short story dreamed up by the A.I. system.) Elsewhere, you can find “Be Honest, Your Life Sucks,” which promises to invent a brand-new existence for its patrons. (It’s the sort of gag you’d expect to find on ClickHole.) Futurism recently reported that BuzzFeed’s A.I. venture has escaped the confines of its quizzes section, and today entire articles, published under the guise of sentient agency, are appearing on the homepage—often accompanied by a modified byline that reads “As Told to Buzzy,” which conjures up the image of a salacious Playboy interview with a robot. BuzzFeed is at least the second major media organization to start automating its SEO-friendly posts. CNET infamously contracted out a swath of financial literacy guides to ChatGPT earlier this year, which inevitably provoked some surreal correction-ish addendums—written by actual human beings—to rectify the software’s various reporting blunders. An A.I. model might not fully understand the nuances of compound interest rates, but it’ll still attempt to articulate them with disastrous confidence.
Media companies are likely going to continue milking the A.I. experiment. If raw, unbridled content generation is a top priority—if an organization’s business model must be underwritten by vacant engagement—then a tool like ChatGPT has plenty of upside. The software can instantly fabricate a tide of benign explainers, ruthlessly optimized for aimless Google searches, for a tiny fraction of the cost of actually employing someone, which is why a lot of journalists feel threatened by its sinister potential. The press has never lacked for labor insecurities, but the fundamental commodity of the news has never really been imperiled by mechanization. However, if Buzzy has already begun to master the art of the Sorting Hat quiz, perhaps it’s only a matter of time before he conquers the rest of the SEO tentpoles. Soon enough, the robots will be ranking Taylor Swift albums, or building galleries of red-carpet dresses, or—gasp!—relaying the time of the Super Bowl.
This is a fair apprehension. Outsourcing the global information distribution apparatus to various unaccountable A.I. models proves that rapacious publishers will truly never learn their lesson. It really wasn’t that long ago when rooting out misinformation was a national objective, and now we’re using A.I. to summarize how credit cards work. But I do think it is important to note that the cadaverous, mind-numbing SEO articles that ChatGPT aims to replace were historically assigned to the lowest and most exploitable rungs of a news organization’s workforce. These weren’t good jobs, and if you entered this business during the steady contraction of the digital media ecosystem, then you probably have a few trauma-flecked memories of being thrown into the gaping content mines, for very little pay, often in hot pursuit of the intangible (and ultimately counterfeit) currency of exposure. I know, because I lived it myself.
I started freelancing professionally in 2009, as a freshman in college, and I happily advertised my capabilities to anyone willing to take advantage. My editors smelled blood in the water, and baptized me in the stringent, impoverished SEO-content flood. I published album reviews for $5 a pop on a long-gone music blog called Prefix; I wrote Complex slideshows for 10 bucks an entry, and I did a ton of writing for my local alt-weekly, the Austin Chronicle, for exactly no money at all. (I’d also occasionally score an amateurish think piece at Vice. They’d pay me $50 per story, which felt like winning the lottery in comparison.) It was impossible to make a living off of these rates without becoming a little bit manic. I distinctly remember sitting in my first apartment and considering the feasibility of drafting 20 record critiques in a workday, which isn’t even a ChatGPT-quality pace. The sum total of the bylines would score me $100, which is slightly less than I made per shift at the pizzeria I worked at in high school. (It goes without saying that in my first four years as a full-time writer, I never cracked $50,000 in annual salary.)
I suspect the corporate overseers at all of those shops easily recouped their investment in my work. The Complex slideshows were gimmicked so that each entry was nested in its own bespoke webpage; scrolling through 100 slides would translate into 100 unique page views. (In 2016, Hearst paid something like $300 million to acquire Complex Media. You’re welcome, guys!) It’s a publishing model that relies on a decentralized labor pool of kids, amateurs, and anyone else bereft of any leverage or bargaining power whatsoever. From sharecropping SB Nation team blogs to credulous Huffington Post fellowships, the digital media sector was once singularly financed by their toil.
Some of these jobs have begun to go extinct, at least on official mastheads. The unionization wave in the media industry has mandated minimum (read: humane) staff salaries at a number of organized offices, which has made it much more difficult to pay some pitiable, entry-level rube $32,000 a year to slave over Disney quizzes. (Again, I’ve been there!) In that sense, ChatGPT has essentially emerged as a way to replicate that system, without the human cost, at previously unseen levels of pleonectic efficiency. Yes, tools like Buzzy the Robot will inevitably eliminate a certain type of menial journalism job. But I’d argue that those positions probably shouldn’t have existed in the first place, because—plain and simple—they fail to meet the basic criteria of being either a job or journalism.
Nobody who enters this field dreams of squeezing 500 bleary, spiral-eyed words out of a “What Time Is the Super Bowl” headline, because those stories only exist to grease the wheels of the decaying, threadbare ad-backed business model that, by its nature, necessitates a wealth of content that is totally ancillary to a newsroom’s overarching project. It is the sort of writing that forces one to wander into the lightless SEO slag pits on a daily basis, becoming progressively more zombified as the labor takes its toll. Nothing about the experience improves your prose, or sharpens your acumen, or makes you a more dogged journalist. If anything, you’ll come away from the experience feeling irrevocably jaded. That’s what makes Buzzy a natural fit for the gig; he’s incapable of internalizing despair.
I’d like to imagine that in an ideal world, media barons would put the residuals harvested by their chattelled automation programs toward creating a much more considerate onboarding environment for young reporters, critics, and bloggers, if only so those jobs will never be inflicted on human beings ever again. But that also requires me to believe that the boardrooms making these decisions will prioritize the best interests of their employees when presented with a cost-saving, payroll-shortening innovation. There’s a good reason why everyone is fearing the worst. When given the option, the media industry’s financiers tend to embrace contraction, which means it’s much more likely that automation will unleash a fresh slate of wild depravities in the media.
I can envision 22-year-olds being hired for the express purpose of auditing the flagrant errors committed by ChatGPT—fact-checkers to the machines—making sure they are accurately reporting the time of the Super Bowl. Or perhaps a future incarnation of the software will be capable of replicating the precise haughty tone of Bret Stephens columns, laying waste to opinion sections across the country. A.I. integration can surely avoid calamity if it’s handled with care by companies that are authentically committed to building a more equitable, sustainable newsroom. You can understand why I’m not feeling particularly optimistic.