Computers and Artificial Intelligence

Some people who work in artificial intelligence argue that the human brain is not really very different in principle from a very sophisticated computer. No matter how long you look at the physical hardware of a computer, you will never see sentences and thoughts. Yet computers regularly produce sentences and can answer questions we put to them. Those sentences are not nonphysical entities existing in addition to the physical bits of which computers are constructed. Computers are made up of nothing but straightforward physical components. Yet they can solve problems, beat grand masters at chess, and answer complicated questions.

The fact that you cannot see the thoughts of computers by examining their hardware is, it may be said, analogous to the way in which you cannot see the thoughts of humans by examining their brains. In both cases, there is nothing present but physical components. Thoughts are the activities of complex physical systems. They are not additional "ghostly" or "spiritual" entities.

But do computers really think? There are two main reasons for concluding that they do not. First, no one believes computers are conscious. They do not represent the world to themselves or feel pleasure and pain. Second, computers simply follow programmed routines. They have no original insights and do not devise new proofs. They just do what they have been programmed to do, no more and no less.

The philosopher John Searle (1980) invented the story of a "Chinese room" as an analogous case. In one version of the story, Chinese characters forming questions are sent into the room, and out of a window comes a set of answers. In fact, the workers in the room do not understand Chinese; they have simply been issued a set of rules specifying things like, "If you receive character x, then you must send out characters y and z." They never understand what the questions or the answers mean, though the answers are all correct.
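The mechanism Searle describes can be made concrete with a small sketch. What follows is only an illustration, not anything from Searle's paper: the rule book, the sample questions, and the answers are invented, and the point is simply that matching symbols against rules can return correct-looking answers while nothing in the system understands Chinese.

```python
# A toy version of the Chinese room: incoming symbols are matched against a
# rule book and the prescribed symbols are sent back out.  The rules and
# examples below are invented for illustration; nothing here grasps meaning.

RULE_BOOK = {
    # "If you receive character x, then you must send out characters y and z."
    "你好吗？": "我很好。",          # "How are you?" -> "I am fine."
    "天空是什么颜色？": "蓝色。",    # "What colour is the sky?" -> "Blue."
}

def chinese_room(question: str) -> str:
    """Return the scripted answer for a question; no understanding is involved."""
    return RULE_BOOK.get(question, "对不起，我不明白。")  # scripted apology otherwise

if __name__ == "__main__":
    # Correct answers come out, yet the "worker" has only followed the rules.
    print(chinese_room("你好吗？"))
```

For these questions, the output is indistinguishable from what a Chinese speaker might say, and that is exactly the feature of the room Searle exploits.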

A computer, says Searle, is like that. Characters go in and answers come out. But the computer does not understand what is going on. It just follows the rules programmed into it. Computers never give inventive or surprising answers (unless their wiring has gone awry). They make no leaps of imagination. They have no idea about what is going on. At the beginning and end of the process, someone understands the question and the answer. That act of intellectual understanding is something a computer can never have.

Such a conclusion is controversial. Daniel Dennett, for example, says that Searle's scenario is much too simplistic. Any Chinese room that could serve as an analogy for human consciousness would have to be able to pass the Turing test (that is, its answers would have to be indistinguishable from those a human being could give). But then the internal system would have to be so incredibly complex that it would no longer be like a simple digital computer program at all. "Understanding," Dennett says, "could be a property that emerges from lots of distributed quasi-understanding in a large system" (1992, 439). Complexity matters, and it could be that a really complex but purely physical system formed of lots of smaller subsystems, none of which was conscious, would understand what was going on.

There is no dispute that no actual computer or robot is conscious. The tasks at which computers outperform people are algorithmic processes that can be completed with extreme speed but whose routines are completely specifiable mathematically. There is no awareness or, what is closely connected to it, imaginative insight.

Proponents of "strong artificial intelligence" believe that some future, parallel-processing supercomputer could become conscious and creative. I see no reason to deny that. It amounts to saying that we could, in principle, construct a complex neural network that might generate consciousness, a sense of continuing self and of moral responsibility. Something like this is being imagined by those who envision the possibility of downloading memories and behavioral dispositions into nonhuman physical forms.

Freeman Dyson, for instance, defines life as "a material system that can acquire, store, process, and use information" (2002, 144). Then he envisages "a transhuman living in a silicon computer" and "a black cloud living in interstellar space" (after a novel by Fred Hoyle). And Martin Rees speaks of "thoughts and memories" being "downloaded into complicated circuits and magnetic fields in clouds of electrons and positrons" (2001, 117).

If we allow the future possibility of what at present are pleasing (or alarming) flights of fancy, this would not show that consciousness is "nothing but" electrons and positrons. It would show that consciousness is generated by complex arrays of electrons in the right configuration. Artificial minds will still be minds. It is a poor argument to say, "We are just like computers. Computers are not conscious. Therefore, we are not conscious." The argument should be: "We are conscious. We could, in principle, make computers just like us. Then computers would be conscious, too."

In fact, once we accept the notion that minds can be downloaded into other material forms, we are able to separate, at least in thought, the information content of minds from their specific physical embodiment. What computer analogies really show is not that we are just machines. They show that we—the very same conscious, intelligent, and responsible agents with a rich inner life of memories, hopes, and fears—could possibly be reembodied in very different forms. It would not be totally surprising, if and when this happened, to find ourselves coexisting with a number of artificially created intelligences—as well, perhaps, as dogs and cats and monkeys and who knows what else.

 