Hi, Computer! How’s Tricks?

An essay by Robert Wright in The Sciences, a publication of the New York Academy of Sciences, caught our eye and set off a good deal of head-scratching. Simply put, Wright wonders whether computers will ever have consciousness, that is, whether “someone may someday build a computer so complex, or a robot so sensitive, that it will be aware of its calculations.”

To be sure, such a machine is well beyond the current state of knowledge. Today’s computers, impressive as they are, don’t even hint at the complexity required for the kind of thought Wright is talking about. Still, the human brain does it, and if one information-processing physical device can do it, why can’t another? Can computer scientists, armed “with the right hardware, the right software, and enough time . . . create computers flushed with pride, riddled with doubt, or alienated by the rapid pace of technological and social change”? Wright asks.

The first thing to consider is whether any living creatures besides people have consciousness. To be sure, all animals respond to their environments, but do they think about them? Do dogs think about what it is like to be a dog? Do birds think about birdness? Do ants have such thoughts? Do amoebas? Where is the line between instinct and mind? Wright says that we can never know what it feels like to be a dog, a bird, an ant or an amoeba, and that all we can ever hope to know is whether it does feel like something to be those things. If it does, they have consciousness. If it doesn’t, they don’t.

Now on to computers. If they could have consciousness, it would bridge the gap between animate and inanimate objects and prove that the brain is nothing more than a very complex machine. A computer programmer, armed with a machine that exactly parallels a human brain, “could control the computer’s subjective state with precision, inducing agony or ecstasy at the touch of a button,” Wright says.

His conclusion is that such a machine is not only unlikely but impossible. Even if a robot were built someday that had states of mind, “it wouldn’t be like anything to be him.”

“If you stumbled onto a planet of such robots,” Wright argues, “there would be nothing immoral about murdering a few of them, because their lives would have no meaning anyway and would be incapable of acquiring any. They would possess no potential for pleasure or pain, for satisfaction or regret. This planet would have cohesive teams, yet no sense of fraternity; stable marriages, yet no love.”

Nor is it likely that these computers would be able to think about consciousness and write essays that explore these questions. The existence of computers puts perennial issues of mind and body in a contemporary light, but computers themselves won’t solve the problems. Only conscious brains can think about them.
