This article is from Big Technology, a newsletter by Alex Kantrowitz.
You can already see the machine at work. Corporations, politicians, threadbois, and “thought leaders” are probing and prodding, searching desperately for ways to use surging curiosity about all things artificial intelligence to mask problems, gain favor with the public, and monetize attention. Amid real technological progress in the field, they’re forging a broad, cynical, and craven A.I.-PR industrial complex that’s just now coming into focus.
This A.I.-PR industrial complex is growing larger and worse than its predecessors—even crypto!—because the technology is making anything seem possible. With so much opportunity, vacuousness fills the gaps, and exploitation follows. Random academics are hitting the speaking circuit to declare ChatGPT could turn us into paper clips. Middling politicians are writing implausible bills hoping to land on the Sunday talk shows. And CEOs are using A.I. as an excuse for absolutely anything that goes wrong in their business.
With real innovation at hand, it can be difficult to distinguish nonsense from progress, but there are some telling signs. Companies declaring they’re using A.I. to replace human workers, for instance, are almost always just papering over more serious internal problems. Their statements typically indicate they’re thinking small, looking to reduce their workforce anyway, or both.
Take IBM, for example. Just this week, its CEO, Arvind Krishna, said the company would pause hiring for back-office roles that A.I. might replace, suggesting A.I. could take over as many as 7,800 positions. Technology this powerful should be able to make workers more productive, not unnecessary. And IBM itself sells that very idea to those who buy its Watson A.I. service. In its marketing materials, it says Watson helps “free up employees to focus on higher value work.” The incongruity is revealing.
Instead of a company responding to a technological sea change, IBM feels more like a diminished giant using A.I. as a pretext to cut costs. Those close to the company say its statements, while lending an A.I. sheen, mask the reality of a big, slow organization that would struggle to hand thousands of jobs to A.I. “Blanket statements work for people that don’t understand how technology can scale across the enterprise,” one ex-IBM employee told me. “It just completely gives more rationale to have more leeway to make more reductions.”
In Washington, meanwhile, politicians and regulators are racing to shout about A.I. There’s merit in examining the technology’s opportunities and risks, but much of the recent wave of interest feels more like posturing than productivity. In late April, for instance, a group of U.S. representatives introduced a bill to prevent A.I. from firing a nuclear weapon. Nobody is handing that decision-making to machines, though, and if an out-of-control A.I. decided to fire a nuke, a bill wouldn’t stop it. But the press release generated plenty of headlines. So, mission accomplished.
Regulators are jumping into the fray as well. Federal Trade Commission Chair Lina Khan, for instance, dropped a New York Times op-ed on Wednesday that was heavy on warnings but light on specifics. Khan suggested that generative A.I. technology that enables scammers could face regulatory action but didn’t spell out under what laws or rules.
“The theories are very vague at this point,” one former FTC lawyer told me of ideas to regulate A.I. “My frustration is that the agencies are talking a lot of talk but not providing much transparency or guidance on how they think it should actually work.”
Plenty of A.I. announcements have real substance to them, but the A.I.-PR industrial complex will grow exponentially as the technology pushes forward. There’s just too much attention that comes with saying the term “A.I.” for anyone to stop now. ChatGPT isn’t the only part of the A.I. boom that sometimes just makes stuff up.