SCIENCE / MEDICINE : If It Thinks and Talks Like a Human, Does That Make It Human? : Computers: Theorists are challenging the belief that artificial intelligence can be the equivalent of a person’s.

THE WASHINGTON POST

In the classic test devised some 40 years ago by the British mathematician Alan Turing, a man is confronted with two locked, adjoining rooms. In one is a woman and in the other is a computer. By asking questions through a keyboard and reading each room’s typewritten answers, the man is supposed to determine which room has the human.
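In programming terms, Turing’s setup reduces to a simple protocol: two question-answering channels, shuffled so the interrogator cannot tell which is which, judged only on their typewritten output. A minimal sketch of that structure follows, with hypothetical placeholder functions standing in for the two rooms:

```python
import random

# A minimal sketch of the Turing-test protocol described above.
# ask_human and ask_machine are hypothetical placeholders for the
# two rooms; only the structure of the test is taken from the text.
def ask_human(question: str) -> str:
    return "I would rather not say."      # placeholder reply

def ask_machine(question: str) -> str:
    return "I would rather not say."      # placeholder reply

def turing_test(questions, judge) -> bool:
    rooms = [ask_human, ask_machine]
    random.shuffle(rooms)                 # hide which room holds the human
    # The interrogator sees only each room's typewritten answers.
    transcripts = [[(q, room(q)) for q in questions] for room in rooms]
    guess = judge(transcripts)            # judge picks room 0 or room 1
    return rooms[guess] is ask_human      # True if the human was found
```

Nothing in the protocol inspects how the answers are produced; the judgment rests entirely on the conversation itself, which is the point of Turing’s design.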

What Turing argued--and what quickly became gospel for scientists working in the field of artificial intelligence, or AI--was that if a machine could be programmed to respond to questions as convincingly and intelligently as a human, it would in some small way be human, or at least the computer could be said to possess the equivalent of human intelligence.

In the last few years, however, as new and more sophisticated computers have emerged and AI researchers have raised anew the question of intelligence and even of artificial consciousness, the adequacy of the Turing test has become intensely controversial.

Led by John Searle, a philosopher at UC Berkeley, a small band of computer theorists insists that passing for a human in conversation is not the same as thinking like a human.

All of this has prompted a great deal of back-and-forth argument in academic journals. Most people involved in the AI effort agree that someday--perhaps within the next 50 years--a computer will be built capable of passing the Turing test.

How the questions are resolved is likely to influence the way new, more advanced computers come to be used by society. Will we, as some suggest, have created a new form of conscious intelligence? Or is something that talks like a human, and may even walk like a human, not necessarily human at all?

Consider the leading critique of the Turing test, the so-called Chinese room theory put forth 10 years ago in a now-famous essay by Searle in the journal Behavioral and Brain Sciences.

Imagine, Searle said, that all the questions asked by the interrogator were in Chinese, a language that the woman in the room adjoining the computer does not understand. But she does have a large reference book in English that outlines the rules for responding to every set of Chinese characters that the interrogator might transmit.

Given a question, the woman would look up the Chinese characters in her book and, applying the rules found there, respond with an appropriate answer.
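Computationally, the woman’s procedure amounts to nothing more than table lookup. Here is a minimal sketch; the two rule-book entries are invented examples standing in for the vast book Searle imagines:

```python
# A toy version of Searle's rule book: Chinese questions map to canned
# Chinese answers. The entries are invented; the point is that the
# lookup attaches no meaning to the symbols on either side.
RULE_BOOK = {
    "你好吗？": "很好，谢谢。",        # "How are you?" -> "Fine, thanks."
    "你叫什么名字？": "我叫小明。",    # "What is your name?" -> "My name is Xiaoming."
}

def chinese_room(question: str) -> str:
    # Find the incoming characters and copy out the prescribed reply,
    # just as the woman does, without understanding a word of it.
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."
```

To a Chinese speaker outside, chinese_room("你好吗？") looks like comprehension; inside, it is a dictionary lookup.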

Any computer capable of passing the Turing test, Searle said, is in fact doing no more than the woman in the Chinese room.

From the outside it may appear to be conducting an intelligent conversation. But it is simply running through a program, manipulating its own “Chinese symbols” according to a giant rule book without actually understanding the meaning of the words.

Searle does not claim that a machine will never be able to think like a human. That, he says, may be possible in some distant day. What his theory suggests is that the way the human mind works is not something that can be broken down and duplicated simply by writing a computer program.

“The mind is an ordinary biological phenomenon like any other part of the body,” Searle says. “It is a function of the brain like digestion is a function of the stomach. . . . When these guys say they will build a computer that has thought, they might as well say they can build a computer that can digest pizza. The one is just as dumb as the other.”

To say that Searle’s Chinese room theory touches a raw nerve in the AI community would be an understatement. Over the past 20 years, a number of theorists have been attempting to do just what Searle claims cannot be done, writing more and more sophisticated programs in an attempt to capture the inner workings of the brain.

In one instance a computer program known as Parry managed to fool several psychiatrists into thinking they were talking to a paranoid patient. Or consider the following exchange, which is a transcript of a conversation between a patient and a computer programmed as a therapist:

Patient: I seem to have lost interest in everything.

Computer: Tell me more about such feelings.

Patient: I have some idea why I feel this way, too.

Computer: You have?

Patient: It seems that I miss having a family.

Computer: What makes you feel you miss having a family?
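Exchanges like this one can be produced by simple keyword matching: the program scans the patient’s sentence for a pattern it recognizes and echoes a fragment of it back inside a canned reply. Below is a minimal sketch in that style; the three rules are reverse-engineered from the transcript for illustration, not taken from any actual program:

```python
import re

# A toy pattern-matching therapist in the spirit of the exchange above.
# Each rule pairs a regular expression with a reply template; the rules
# themselves are invented to fit this one transcript.
RULES = [
    (r"I seem to have lost (.*)", "Tell me more about such feelings."),
    (r"I have some idea why (.*)", "You have?"),
    (r"(?:It seems that )?I miss (.*)", "What makes you feel you miss {0}?"),
]

def respond(patient_line: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, patient_line, re.IGNORECASE)
        if match:
            # Echo the captured fragment inside the canned reply,
            # dropping the patient's closing punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."               # default when no rule applies
```

Fed the patient’s three lines, these rules reproduce the computer’s replies word for word, which is Searle’s point in miniature: nothing in the loop knows what a family is.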

Of course, mimicking exchanges between therapist and patient is a far cry from duplicating normal conversation. But few researchers have taken kindly to the suggestion that programs they write will never amount to more than simple symbol manipulators.

How fair is it, critics ask, for Searle to insist there is a clear-cut distinction between simple manipulation of symbols and true understanding? Children often use big words or do arithmetic without really understanding what they are doing.

Some of Searle’s critics say that “understanding” is actually not something that someone either has or does not have. Just as children gradually come to grasp the substance of the concepts and words they use, so, over time, could the woman in the Chinese room.

Questions have also been raised about the adequacy of Searle’s entire analogy of the woman leafing through a rule book to respond in Chinese. Writing down all the instructions she would need, his critics say, would fill hundreds of thousands of volumes and make her task quite impossible.

Searle counters by substituting for his room a “Chinese gymnasium” full of millions of people who speak English, with each person running around with one small booklet and helping out with one part of the task. The gymnasium, he says, could easily fool the interrogator outside, but it still would not have any real understanding of what it was doing.
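Computationally, the gymnasium is the same lookup as before, simply sharded across many hands. A sketch, again purely illustrative, of a toy rule book split into small booklets so that no one worker holds more than a fragment:

```python
# The "Chinese gymnasium": a rule book split into small booklets so
# that no single worker holds the whole thing. The sharding scheme is
# invented for illustration.
def make_booklets(rule_book: dict, n_workers: int) -> list:
    booklets = [{} for _ in range(n_workers)]
    for i, (question, answer) in enumerate(rule_book.items()):
        booklets[i % n_workers][question] = answer  # deal entries out round-robin
    return booklets

def gymnasium(question: str, booklets: list) -> str:
    # Each worker checks only his or her own booklet; whoever finds
    # the characters calls out the prescribed reply.
    for booklet in booklets:
        if question in booklet:
            return booklet[question]
    return "请再说一遍。"                 # same default as the single room
```

Whether one clerk or a million, the answers that come out are identical; the dispute is over whether sheer scale and interconnection could nonetheless give rise to something the single room lacks.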

Or would it? With that much activity going on, some AI researchers believe, it is quite easy to imagine that the gymnasium could reach a kind of critical mass, taking on properties not found in the smaller room.

The analogy is made to the human brain, which, like the gym, has billions of neurons, each individually oblivious to the brain’s larger purpose but each connected to many others. Collectively the neurons are capable of conscious thought.

Of course, that is only an analogy. What everyone agrees on is that the reason there is an argument over the Chinese room is that no one really knows how the brain works or how thinking and consciousness arise out of the mush of human gray matter.

“Searle has a biological prejudice,” said Indiana University computer scientist Douglas Hofstadter, an outspoken opponent of Searle’s. “I believe that the brain is governed by the laws of physics. There is a gulf between us. It’s almost a religious divide.”
