Tesla’s first-ever “Investor Day,” hosted by CEO Elon Musk and his leadership team at the carmaker’s Gigafactory in Texas on Wednesday, didn’t quite end up being the sweeping, transformative moment that Tesla fans and funders had anticipated. Not for lack of promise—the event was chock-full of ambition, like a new “Master Plan” to have Tesla lead the clean-energy transition by expanding its green-tech manufacturing into products like heat pumps and batteries for energy storage. Also teased: a Gigafactory buildout in Mexico, an improved Supercharger, and two potential new vehicle models. But it was apparently a letdown. Musk demurred when asked for hard details on new Tesla products, and the company’s stock—still the primary driver of the CEO’s wealth—fell by about 7 percent following the event. To judge by the internet’s reception, the most notable moment happened not during the core presentation, but during a Q&A at the very end, when an attendee asked Musk about the artificially intelligent elephant in the room: “I’m curious for your thoughts on how generative A.I. and these rapid breakthroughs in A.I. in the last months could help you make cars less hard to make.”
Musk’s closing response made for the most sober, halting portion of the entire four-hour event. “I don’t see A.I. helping us make cars anytime soon,” he began, before expanding his purview beyond Tesla. “I’m a little worried about the A.I. stuff. I think it’s something, I don’t know, we should be concerned about.” After a lengthy pause: “I think we need some regulatory authority or something overseeing A.I. development and just making sure it’s operating within the public interest.” Then, a moment of self-reflection: “And, you know, it’s quite a dangerous technology. I fear I may have done some things to accelerate it, which is, I don’t know.” Musk brought his remarks back to “useful” applications like Tesla’s self-driving tech (uh-huh) before concluding Investor Day with some hesitation: “Tesla’s doing good things in A.I., and”—he sighed—“I don’t know, this one stresses me out, so not sure what more to say about it.”
It was an unusual break in bravado from the multibillionaire, a showman generally more inclined toward cowboy hats, big-ass trucks, and unbridled techno-optimism over somber addresses. But it also makes sense when you think about Musk’s ambivalent relationship with artificial intelligence technology, his longtime interest in its implications, and the way he perceives his own place in the now-rapidly-shifting A.I. landscape.
Unlike Mark Zuckerberg and the many rockstar entrepreneurs of the 2000s Silicon Valley boom, Elon Musk has always sounded more fearful than psyched when it comes to A.I. Back in 2014, during a speech at MIT, Musk told students that “we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.” Even though Musk had long expressed skepticism toward government regulations, he stated at MIT that “I’m increasingly inclined to think that there should be some regulatory oversight … just to make sure that we don’t do something very foolish.” Such comments may have seemed odd at the time, considering that Musk had already invested in the A.I. startups DeepMind and Vicarious; he claimed to CNBC that those financial ventures were meant for him “to just keep an eye on what’s going on,” appointing himself an A.I. watchdog lest humanity end up in a Terminator-style future.
Musk prominently kept up this worry-and-oversight approach. In 2015, he would both sign an open letter opposing autonomous weapons and co-found a little nonprofit startup you may have heard of: OpenAI, which was initially established as a research center for “building technologies that augment rather than replace humans,” as the New York Times wrote. In 2017, he launched his Neuralink venture to craft devices that could interact with the human brain—very Matrix-y, yes—and pleaded with American governors to be “proactive with regulation” for A.I. tech, which he deemed “the greatest threat we face as a civilization,” citing his own “access to the very most cutting-edge A.I.” (A representative tweet of that time: “Competition for AI superiority at national level most likely cause of WW3 imo.”) The following year, he left the board of OpenAI over a “disagreement” regarding its mission. That was also the point at which this obsession spilled over into his personal life; Musk started dating the art-pop musician Grimes only after discovering she had already made the same pun he’d planned to tweet, riffing on the Singularity-esque “Roko’s Basilisk” thought experiment.
It’s always been hard to know what to make of the Musk A.I. crusade. On the one hand, his botpocalypse fears were shared by luminaries like Stephen Hawking; on another, plenty of A.I. researchers and Silicon Valley peers thought of Musk as alarmist; on yet another, the futuristically minded Tesla CEO helped to advance terrifying-sounding A.I.-tech inroads like humanoid robots and thought of himself as uniquely positioned to act as humanity’s guardian. Hence, his requests for the government to get on his ass while his teams constructed Tesla robots and Autopilots.
Musk was both an A.I. voice of caution and accelerationist, which tracked with the philosophy underpinning his stance: “longtermism,” the idea that we have a moral duty to maximize human potential and capability in the future, whatever form that takes. That term gained wider recognition beyond the tech sphere—where it had found a receptive audience—after the fall of FTX CEO Sam Bankman-Fried, whose association with the related “effective altruism” movement guided his own earn-to-give ethos. (Or at least that’s how SBF, now accused of using FTX like a personal piggy bank, once described his philosophy.) Influential thinkers in the space often cite runaway A.I. as a barrier to the goal of maximizing human potential, on par with threats like nuclear war and pandemics. Still, while longtermists and effective altruists may be scared of A.I., many also see its development for humanity’s benefit as a necessity: The Future Fund, a significant EA investment firm, has stated that “with the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death and debilitating disease.” Yet, such actors say, this should be done in a way that does not reduce human influence and impact on shaping civilization for years to come.
It seems that’s the kind of path Elon Musk has taken over the years: warn against A.I.’s worst potential, strive to ensure it’s only a boon for humanity. What Musk may have been thinking about on Tesla Investor Day, then, was whether that’s actually been the effect of his A.I. experiments and rhetoric. The businessman is discomfited by OpenAI (now a for-profit firm) and its awe-inspiring advances in generative-text tools like ChatGPT; he apparently also hates that the company is now heavily invested in Microsoft, decrying what he views as corporate control of open-source research. To Musk, the chatbot arms race is especially jarring—last month, he compared Microsoft’s unruly Bing tool to a vintage video game antagonist that “goes haywire and kills everyone.”
But what does Musk actually plan to do about the new A.I. threats he helped bring into the world, if they are in fact as dangerous as he views them to be? Well, as of this week, he’s reportedly building a rival to ChatGPT because—wait for it—the interface is trained not to spout racial slurs and otherwise has content-related guardrails. Back in December, he tweeted that “the danger of training A.I. to be woke—in other words, lie—is deadly.” About as deadly as World War III, I suppose.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.