Do We Need an “Artificial Intelligence Nanny”?

Over on h+ magazine, a publication dedicated to far-term technology, Ben Goertzel proposes the idea of an “artificial intelligence nanny”: “a powerful yet limited AGI (Artificial General Intelligence) system, with the explicit goal of keeping things on the planet under control while we figure out the hard problem of how to create a probably positive Singularity.” (The Singularity, for those not fluent in futurism, refers to the creation of some beyond-human intelligence.) Goertzel wants us—or rather, the Singularity crowd—to consider inventing a protector to keep humans from destroying ourselves with synthetic biology, nanotech, or some malevolent artificial intelligence; he also thinks that this AI nanny could save us from an evil super-team of terrorists and tech geniuses bent on destruction. He admits that creating an AI nanny would be technologically challenging—at the moment, impossible—but says that the tools to build one would emerge in tandem with the dangerous innovations that would require such a benevolent babysitter.

When you put it that way, doesn’t the future seem terrifying?

Read more on h+.