As a child of the ’80s, I can divide my life cleanly into Before Google and After Google. Right around the turn of the millennium, the internet stopped being a tangled thicket of incomplete lists of weird stuff and became a useful research database. Ever since, Google Search has been one of the few technological constants of my adult life, persisting through the rise of smartphones, social media, streaming services, and even drone-based burrito delivery (also, what happened to that?).
In all that time, nobody has been able to challenge Google’s role as gatekeeper to the cornucopia of digital abundance. More than 90 percent of internet users around the world use Google to shop, navigate, and satisfy their curiosity about pretty much everything. The ads Google sells against this activity (and on other websites) have fueled a money-printing machine that generated more than a quarter-trillion dollars in sales last year.
Google started off as a kind of mapmaker for the internet, but through its phenomenal success gradually became the architect of the World Wide Web. Today, entire industries from retail to car insurance depend on how Google manages search results and online advertising. The electronic frontier became more and more corporate, organized according to the tyranny of the click: How many users click on your ad, your headline, your video? That number determines how much money you can make online (and explains the mind-numbing plethora of articles written with the express purpose of popping up when you type, for example, What time do the Oscars start?).
The click-based economy has made the world more efficient in some ways, but it has also turned this miraculous global information databank into a frenzied real estate auction, with every website scrabbling to climb to the top of the search results, collect the most clicks, and retain the most eyeballs. Every webpage you load is a little slower thanks to the back-end auctions that determine which ads you’ll see. Countless professional journalists fought losing battles against the small-minded metrics of clicks and open rates, and then adapted to them, making “search engine optimization” among the most treasured journalistic skills. YouTube and social media sites chase clicks so intently that their recommendation algorithms can inadvertently hook users with increasingly salacious and radicalizing content. Google has built an internet where the people with the most clicks win, and Google plays a key role in counting those clicks.
What if that all changes?
The arrival of OpenAI’s ChatGPT in late 2022 sent shock waves through tech company boardrooms. Google’s rival Microsoft wasted little time using its stake in OpenAI to create a beta version of something new: a conversational agent connected to Microsoft’s own search engine, Bing. Google hooked up its own next-generation chatbot, Bard, to its core search product. It’s early days, and it shows: Microsoft’s chatty new Bing beta recently creeped out a New York Times reporter with its megalomania and amorous advances. Bard, meanwhile, made a factual error in its release demo, sending shares of parent company Alphabet plummeting.
The speed with which Google has moved to introduce a half-baked A.I. tool into its biggest moneymaker, despite the threat Bard itself could represent to that moneymaker’s business model, tells you just how seriously our established gatekeeper to all the world’s information is taking this moment. (A Google spokesperson, after the publication of this article, reached out to say that the company has no intention of replacing Search with Bard and that Bard, as a chat tool, is distinct from other large language model–based A.I. features previewed at the same time, including one that does work with Search to distill information from across the web. The spokesperson said that Bard was not already “changing the direction” of Search and reiterated that neither tool has launched publicly.)
What would it mean to replace the click economy and its cornerstone, the search bar, with something like a conversation? This is what Bard and a ChatGPT-powered Bing are offering: the chance to ask more human questions (Where’s the best place to get a Christmas-style burrito around here, and what drones would you recommend to carry it aloft?) and have sustained conversations with a system that retains context. (Though notably, in an attempt to rein in some of its chatbot’s zanier behavior, Microsoft recently limited users to five questions per session.) Instead of offering you a menu of links (and ads), your interlocutor/informational concierge cuts to the chase, perhaps offering some footnotes for further reading. It will even offer up its answers in a pirate-y voice or rhyming couplets if you ask.
Before Google Search came along and devoured the industry of digital information access, this kind of synthesis was what everyone thought our digital future would look like. Early visionaries like Vannevar Bush foresaw the ocean of information that we swim in and imagined systems that would allow us to follow “trailblazers” and synthesizers. Science fiction writers in the 1980s and ’90s imagined A.I. constructs who acted like (and were sometimes called) librarians, like the polite subsystem in Neal Stephenson’s Snow Crash that can summarize books, correlate information, and conduct long conversations with humans. DARPA, the U.S. military’s research wing, invested millions into a project called the Personal Assistant that Learns—PAL—to build something similar in real life for military commanders. Eventually, that research led to Siri, and with it the dream of a computer you could really talk to.
A conversation-based interface would be a radical shift from how we have all trained ourselves to work with keyword-based systems like Google. When I have a complicated question to ask the internet, I often have to reverse-engineer my query, trying to imagine the scenarios, possibly very different from my own, in which someone might already have answered it. The list of search results that comes back, with sponsored links at the top, offers me choices about which trail to follow, which authority to believe. Every internet user quickly learns to eyeball a link’s seeming credibility and utility by its URL and how it shows up in Google Search.
Replacing that database query with a conversation represents a transformation in what Google has long called its users’ “quest for knowledge.” The classic search bar strives to be ubiquitous, essential, and almost invisible. But these new chatbots are not stepping out of the way. They’re stepping forward, shaking hands, presenting personality and affect in their interactions with users. They offer synthesis, extrapolation, and iterative refinement through follow-up questions and dialogue. They offer the illusion of judgment.
Instead of a list of possible sources, we have a single voice. When users interact with Bing (R.I.P. Sydney) or Bard, the underlying sites are tucked away as footnotes or obscured entirely. They don’t show you their math. It’s tantalizing for all of us who have ever muttered “just tell me the answer already” in frustration when Google Search fails to deliver, but it’s also troubling. Setting aside the well-documented problem these systems have with getting things wrong, making things up, and creeping people out, the illusion of a single, coherent answer can be dangerous when the nature of truth is complicated and contested.
The difference between a question and a database query has huge implications for how we relate to the sprawling universe of human knowledge, and to each other. A list of search results, however curated and manipulated it might be, still stands as a reminder that there might be competing and conflicting answers to your question. A conversational interface with a charming and glib A.I. tucks all of that messiness behind the curtain. These systems could become yet another layer of obfuscation between us and the source code of human knowledge. Another black box, but one that talks, tells jokes, and can write you a sonnet on command. Ironically, OpenAI is trying to deal with the persistent problem of these systems “hallucinating” false information by teaching them to validate their results using a search engine.
But it’s going to be a lot harder to sell clicks from a bot. What happens to the click economy if a smooth-talking A.I. becomes a strange mutant of a spokesperson and a Magic 8 Ball, or something like an avatar for the sum total of human knowledge? Information-rich resources like newspapers and discussion boards might find that these systems are harvesting their material and rephrasing it so eloquently that nobody ever bothers to navigate to the original page. It seems like a recipe for sliding farther down the sketchy slope of back-end payola, where content creators are dependent on tech giants to offer them some slice of the revenue pie with no way to independently verify their numbers.
The question of what a business model might be for these new gatekeepers, and even more so for the suddenly invisible providers of that information across the internet, gets to a deeper issue: We are talking about putting a new architect in charge of the internet. Search engines rely on hyperlinks, those explicit connections between words and pages that are legible and programmable by humans. Starting from the dawn of the modern encyclopedia, you could argue that the whole structure of empirical human knowledge is built out of the rivets and bolts of footnotes and cross-references.
Contrast that with large language models like ChatGPT: machine learning systems that, by design, identify complex relationships between words and phrases based on probabilities, leading some to call them “stochastic parrots.” No human, not even the engineers who built them, can glean much insight into how those associations work across thousands or millions of variables, or, more importantly, why they make particular associations. And that makes it much harder to correct mistakes or prevent harm without relying on kludgy filters and censorship. A shift from links to probabilistic relationships is like moving from Newtonian physics to quantum weirdness, or from truth to truthiness. How do you know? Because the chatbot told you so.
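To make the contrast concrete, here is a deliberately toy sketch of that probabilistic word association, written in Python. The vocabulary and the probabilities are invented for illustration; a real model derives its distributions from billions of parameters that no one can inspect directly, which is the whole point.

```python
import random

# Invented numbers for illustration only: the "knowledge" of a language
# model, reduced to probabilities over which word comes next. There is
# no source, no footnote, no hyperlink behind any of these weights.
next_word_probs = {
    "the sky is": {"blue": 0.7, "falling": 0.2, "limitless": 0.1},
}

def complete(prompt):
    """Pick the next word by sampling from a learned distribution.

    The choice is a statistical association, not a citation: the
    system cannot point back to where an answer came from.
    """
    probs = next_word_probs[prompt]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

print(complete("the sky is"))  # usually "blue", occasionally "falling"
```

A hyperlink is an explicit, human-legible pointer from one claim to its source; the dictionary of weights above is the opposite, a blended association with no trail back. That asymmetry is what the shift "from links to probabilistic relationships" means in practice.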
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
Update, March 6, 2023: This article has been updated to include a comment from a Google spokesperson on the difference between Bard and other A.I.-based tools planned for Google Search.