BOOK REVIEW: Exploring Connections of the Mind

Apprentices of Wonder: Inside the Neural Network Revolution by William F. Allman (Bantam Books: $18.95; 211 pages)

The effort to get computers to think like people has been under way for 40 years now, and despite much work and much hype, the goal remains as elusive as ever.

Even the staunchest members of the artificial intelligentsia concede that it hasn’t worked. Computers are too rigid. They have neither judgment nor common sense, and no one has a clue how to give them these skills, which humans take for granted.

Some breakthrough, some new approach, will have to be made before machines exhibit meaningful “thought.” We are nowhere near having computers that can understand jokes and laugh at them.

In the last few years, a group of computer scientists, neuroscientists, psychologists and philosophers has been exploring new ideas about machine intelligence that turn the field on its head, so to speak.

Mind and thought emerge from the millions of connections in the brain, this group says. If you want to mimic what the brain does, you have to abandon the linear, logical processes that computers do so well and substitute a web of interconnections that are like the connections among the neurons, or nerve cells, of the brain.

This approach is called “neural networks” or connectionism, and William F. Allman brings us up to date on this research in “Apprentices of Wonder,” a clear-eyed study of the work and the people who are doing it. Concentrating on a few of the major researchers in the field, Allman incisively describes its promise, its excitement and its shortcomings.

“Scientists are creating a new model of the mind, one that attempts to understand how the mind works by examining how the brain works,” Allman says at the outset.

Of course, how the brain works is no easy question either. How does flesh and blood produce mind?

It now appears that thought emerges from the interaction of millions of individual neurons. It’s a mistake to concentrate on each individual element. How they behave together is where the action is.

John Hopfield, a physicist turned brain researcher at Caltech, tells Allman: “Many of the mysteries of the brain are what in physics we call emergent properties. They arise from very large numbers of elements. Though they are the consequences of millions of microscopic relationships, emergent properties often seem to take on a life of their own.”

A basic premise of the neural network movement is that our brains are not logic machines. That’s why traditional artificial intelligence has come a cropper. Despite the standard model of “good thinking,” we really don’t do many things by logic. Our best thought displays intuition and insight, which may be why logical arguments rarely persuade anybody of anything.

“All the experimental evidence points to the fact that people aren’t rational,” says David Rumelhart, a cognitive psychologist at Stanford. “This is a simple fact that the rationalists refuse to accept.”

So how does one make a computer, which is based on logic, behave like a brain, which isn’t? The trick is to have lots of connections among the elements and then see what emerges when you turn on the machine.
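
The idea is easier to see in a small sketch than in prose. The toy program below, which is not from Allman’s book, wires up a handful of Hopfield-style units with weighted connections, stores two patterns in those weights, and then lets the units keep updating one another until a stored pattern re-emerges from a corrupted starting state. Every pattern and number in it is invented for illustration.

    # A minimal sketch of the connectionist idea: simple units joined by
    # weighted connections, with behavior that emerges from the connections
    # rather than from explicit rules. Illustrative only; the patterns and
    # numbers are invented.
    import numpy as np

    # Store two patterns in a small Hopfield-style network by building the
    # connection weights from the patterns themselves (a Hebbian rule).
    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, 1, -1, -1, -1]])
    n = patterns.shape[1]
    weights = np.zeros((n, n))
    for p in patterns:
        weights += np.outer(p, p)
    np.fill_diagonal(weights, 0)   # no unit connects to itself

    # Start from a corrupted copy of the first pattern and let the units
    # repeatedly update from their weighted inputs.
    state = np.array([1, -1, -1, -1, 1, -1])   # one unit flipped
    for _ in range(5):
        state = np.where(weights @ state >= 0, 1, -1)

    print(state)   # settles back to the stored pattern [1, -1, 1, -1, 1, -1]

Nothing in the program says “restore the first pattern.” The recall emerges from the pattern of connections, which is the point Hopfield is making.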

The Pentagon is so interested in this work that the Defense Advanced Research Projects Agency is planning to spend as much as $400 million on it over the next few years. That is a big research project, even by Defense Department standards.

Early successes have been achieved in getting neural networks to perform tasks like reading aloud, which is still virtually hopeless for standard artificial intelligence. Giving the computer a lot of rules doesn’t work. There are always too many exceptions. Instead, Terry Sejnowski of the Salk Institute in San Diego has built a neural network that is learning to read aloud by experience.
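
To make the contrast with rule-writing concrete, here is a toy program in the same spirit, though vastly simpler than Sejnowski’s network. A single unit learns from examples, rather than from rules, whether a “c” is pronounced hard as in “cat” or soft as in “city,” by nudging its connection weights whenever it gets an example wrong. The examples, the encoding and the learning rate are all invented for the illustration.

    # A toy illustration (not Sejnowski's NETtalk): one unit learns from
    # examples whether "c" sounds hard ("k") or soft ("s") depending on the
    # letter that follows it. Everything here is invented for the sketch.
    import numpy as np

    vowels = "aeiou"

    def encode(next_letter):
        # one-of-five code for the letter after the "c", plus a bias input
        x = np.zeros(6)
        x[vowels.index(next_letter)] = 1.0
        x[5] = 1.0
        return x

    # Training examples: letter after "c", and 1 for a soft sound, 0 for hard
    examples = [("a", 0), ("o", 0), ("u", 0), ("e", 1), ("i", 1)]

    weights = np.zeros(6)
    for _ in range(20):                    # repeated passes over the examples
        for letter, target in examples:
            x = encode(letter)
            output = 1 if weights @ x > 0 else 0
            weights += 0.1 * (target - output) * x   # adjust connections on error

    for letter, _ in examples:
        out = 1 if weights @ encode(letter) > 0 else 0
        print(letter, "soft" if out else "hard")

After a few passes the unit gets every example right, without ever being handed the spelling rule it has, in effect, extracted from the examples.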

How far this can be taken is anybody’s guess. The successes to date have been limited but tantalizing. Traditional artificial intelligence has had modest successes along the way too, but it has never been able to generalize them. Will neural networks face the same fate?

Allman says: “Many researchers, frustrated by a lack of progress in artificial intelligence, are looking to connectionism for new inspiration. But while there may be many reasons for rejecting the traditional model of the mind, reasons for embracing connectionism are mostly hypothetical.” Neural networks are still at the “not yet” stage of research, the standard answer most people give when asked how far a new field has come. It is the answer that artificial-intelligence researchers have been giving for decades when asked when they could, for example, get computers to see, understand speech, or read a novel and summarize its plot.

Perhaps neural networks will provide the key to machine intelligence. Allman’s book is hopeful but not breathless about the prospects.
