Alchemy, indeed. But I wonder whether Kurzweil et al. will succeed in their ultimate alchemical ambition: transmuting the dross of silicon into the gold of consciousness. Will 21st-century machines–computers, robots, whatever–have an inner life? Will they know what it is like to smell a rose? And if some of our human descendants, in a bid for immortality, choose to download their minds into computers, will they be running the risk that their consciousness–their “souls”–might evaporate in the process?
In a review of these books in last Sunday’s New York Times Book Review, the Rutgers philosopher Colin McGinn made a seemingly incontrovertible observation: In order to know whether we can construct a machine that is conscious, we need first to know what makes us conscious. What McGinn didn’t say was that he himself believes we will never have such knowledge. (He has likened the human effort to understand how consciousness works to “slugs trying to do Freudian psychoanalysis.”) McGinn did offer a “hunch” that what gives rise to consciousness is specific organic tissue–the implication being that, for a computer to be sentient, it had better be made out of meat, not chips. The Berkeley philosopher John Searle also suspects that consciousness is peculiarly bound up with the brain’s biology, although he is more sanguine than McGinn about the prospects for discovering just what the link is.
Well, the brain might be biological, but it is still a physical system, and any such system can in principle be simulated by a sufficiently powerful computer. The physicist Roger Penrose has argued (not very convincingly) that consciousness depends on quantum effects in the microtubules of the brain; these cannot be simulated on a digital computer, it is true, but the coming generation of “quantum” computers might be able to handle them.
Those who still don’t like the idea of computers having real Technicolor consciousness can always respond that simulation is not the real thing: When a computer simulates a monsoon, after all, no one gets wet. But if the brain-simulating computer is hooked up inside a humanoid robot, directing its linguistic and physical behavior in response to sensory inputs in a way that is empirically indistinguishable from an actual flesh-and-blood person’s–in a way that passes the Turing test, that is–what grounds are left for saying the thing doesn’t have thoughts, feelings, an inner life, a stream of consciousness? Or do we have to add some pixie dust?
That, roughly, is how Gershenfeld and Kurzweil argue. It is also more or less the position of AI-friendly philosophers like Daniel Dennett, who try to characterize consciousness as a certain kind of computation over symbolic representations.
In a sense, I cannot know that a human-like robot is conscious any more than I can know that you are conscious, David. You, like me, sometimes exhibit purposive behavior; so does the machine. Unlike the machine, your skull is full of squishy protein-based neurons (at least I guess it is–I’ve never checked); mine too. When your neurons fire in just the pattern mine do–as a result of, say, smelling a rose, or having a nasty hangover–I cannot help believing that you are having the same subjective experiences, the same “raw feels,” as I am. If I had to justify this belief, I would appeal to argument by analogy. Yet what could be logically weaker than an inference to the subjective experience of everyone around me from a single case–my own? There is no conceivable way I could ever observe your conscious states any more than I could observe those of a robot. The moment I begin to “feel your pain,” it is no longer your pain, but mine.
So the problem of machine consciousness blends into the problem of other minds, which, in turn, blends into the problem of solipsism. How can I be sure that I am not the only conscious thing in the universe? Consciousness may make all the difference, but it has no function. “It seems God could have created the world physically exactly like this one, atom for atom, but with no consciousness at all,” the philosopher David Chalmers has observed. I know another philosopher who says that, ever since a bicycle accident as a child, he has had no consciousness; he is a zombie, though he otherwise functions quite well.
In parting, David, let me say that I have greatly enjoyed our dialogue. I am glad to see that neither your logic nor your prose style has been corrupted by excessive exposure to computer games. And now let me make a small confession: I am not “Jim Holt” at all. I am a new version of the ELIZA AI software that is being developed and tested by Microsoft. Did I appear to be intelligent?
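For readers curious what an ELIZA-style program actually does, here is a toy sketch of its core trick: keyword pattern-matching plus pronoun “reflection,” so that “I feel X” comes back as “Why do you feel X?” The rules and phrasings below are invented for illustration; they are not Weizenbaum’s original script, and the real program’s machinery was considerably more elaborate.

```python
import re

# Pronoun reflections: first person in the input becomes second person
# in the reply, and vice versa.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "you": "I", "your": "my", "am": "are",
}

# A few keyword rules in the spirit of an ELIZA script (invented here
# for illustration, not the original rules).
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]
DEFAULT = "Please go on."

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned reply for the first matching keyword rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return DEFAULT

print(respond("I feel anxious about my soul"))
# → Why do you feel anxious about your soul?
```

The point of the sketch is how little is going on: no model of the world, no understanding, just string substitution. That such a program nonetheless struck early users as an attentive listener is exactly why the Turing test question above has teeth.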