
Book Review: Reflections on Brains, Computers


The Society of Mind by Marvin Minsky (Simon & Schuster: $18.95)

Language translation was an early goal of computer science, but not much is heard about it any more. The reason is simple: It doesn’t work.

The reason it doesn’t work says a lot about computers and languages and minds. It turns out that even if you have a comprehensive dictionary and a complete description of the grammars of both languages, translation cannot be done mechanically because the meaning of a sentence is not fully contained in its words.

There is something else, something extra, that a speaker and listener (or writer and reader) bring to language to fix its meaning. Without that something extra--context, if you will--the ambiguities of most words leave meanings unclear. Within the context, human speakers generally have little trouble figuring out what’s being said. It’s so easy that we don’t think about doing it. We hear other people talk and we immediately know what they mean, and that’s that.


Common Sense

Computers, on the other hand, have no idea of context, and no computer scientist has a very good idea about how to give it to them, certainly not in any general, real-world situation. People learn things about the world throughout their lives and bring this “common sense” to bear automatically in interpreting what they hear. Computers don’t, and at the present state of knowledge, they can’t.

So if a person hears a sentence like, “The astronomer married the star” (to use one of the author’s examples), he doesn’t spend much, if any, time wondering whether the word star refers to a celestial body seen in the nighttime sky, even though the word astronomer might lead a machine in that direction. Humans know that marriage involves people and that celestial bodies are not people, so the sentence probably means that the astronomer married a successful actress.
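To see what the machine is missing, consider a minimal sketch of the sort of rule a program would need. Everything in it, the tiny sense inventory, the category labels, the disambiguate function, is a hypothetical toy of my own devising, not something drawn from Minsky’s book:

# A toy "selectional restriction": the verb "married" demands a person
# as its object, which rules out the celestial-body reading of "star".
SENSES = {
    "star": [
        ("celestial body", "object"),
        ("famous performer", "person"),
    ],
}
VERB_WANTS = {"married": "person"}

def disambiguate(verb, noun):
    # Keep only the senses whose category satisfies the verb's demand.
    required = VERB_WANTS.get(verb)
    matches = [gloss for gloss, category in SENSES.get(noun, [])
               if category == required]
    return matches[0] if matches else "still ambiguous"

print(disambiguate("married", "star"))  # prints: famous performer

The rule itself is trivial; the catch is that a real program would need millions of such facts about marriage, people and everything else before the trick became reliable.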

Computers are not nearly as clever. The question of how to give computers this kind of knowledge, and how to give it to them in usable form, has occupied the practitioners of artificial intelligence for some years now. It stands to reason that if you want to get machines to think like people, you should find out how people accomplish this trick.

And this, in turn, has led to theories of mind and how the brain works. Marvin Minsky of MIT, a pioneer in artificial intelligence and a man of wide and varied knowledge, has developed a theory, which he spins out in this book.

In Minsky’s view, the mind is made up of agents directed to particular tasks, which oversee smaller subagents, which oversee smaller parts of the tasks at hand. Organization is the key to the whole enterprise. Hence the title, “The Society of Mind.” It’s the constantly shifting organization of agents and subagents that gives the mind its versatility and depth.

Agents and Subagents

Thus, a child’s agent Play can turn on Play-With-Blocks (rather than Play-With-Dolls or Play-With-Animals), which in turn activates See, Grasp, Move and a host of other subagents who control various parts of the activity of building things with blocks. All the while, Play is holding off Sleep and Eat, which want to take control of the child’s mind and get it to do something else.
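For readers who think in programs, the arrangement can be caricatured in a few lines. The class and the activation scheme below are my own toy rendering of this example, not an implementation Minsky offers:

# Each agent oversees subagents and holds off rival agents.
class Agent:
    def __init__(self, name, subagents=(), rivals=()):
        self.name = name
        self.subagents = list(subagents)
        self.rivals = list(rivals)
        self.active = False

    def activate(self):
        # Taking control means suppressing rivals, then waking subagents.
        for rival in self.rivals:
            rival.active = False
        self.active = True
        print(self.name, "is active")
        for sub in self.subagents:
            sub.activate()

see, grasp, move = Agent("See"), Agent("Grasp"), Agent("Move")
blocks = Agent("Play-With-Blocks", subagents=[see, grasp, move])
sleep, eat = Agent("Sleep"), Agent("Eat")
play = Agent("Play", subagents=[blocks], rivals=[sleep, eat])

play.activate()  # Play holds off Sleep and Eat, then wakes the block-builders

The point of the caricature is the organization: No single agent is smart, but the pattern of which agents wake and which are held off is what produces coherent behavior.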


Minsky shows how his theory accounts for learning and language and long- and short-term memory and perception and emotion and many other things we are familiar with as activities of mind. It’s not clear whether his theory is just a model, a useful analogy, as it were, or whether he thinks the structures he proposes actually exist in the brain.

But he does make an intriguing argument, which he supports with a bevy of fascinating examples, while acknowledging that hardly anyone is very good at thinking about thinking and that no one has any real sense of what’s going on inside his head while he hatches ideas or talks to his neighbors or watches television or does anything else, for that matter.

Some of the best parts of Minsky’s book are the paradoxes he alludes to in thinking about thought. “Isn’t it amazing that we can think, not knowing what it means to think?” Minsky asks. “Isn’t it remarkable that we can get ideas, yet not explain what ideas are?” Then again, it’s not so amazing: We also drive cars without knowing how an internal combustion engine works.

He is also aware that it is hard to think about the first principles of things because, while these questions may be entertaining and engaging, they have never been answered, cannot now be answered and probably never will be. They recede into the distance forever. (I have specifically not asked for or proposed a definition of mind or consciousness or any such idea for exactly this reason. It doesn’t get you very far. I assume that we all know what we mean when we refer to mind. It’s the little voice inside your head.)

Minsky’s goal is to use whatever knowledge he can glean about the brain to help him build smarter computers. “We are still far from being able to create machines that do all the things people do,” he acknowledges. “But this only means we need better theories about how thinking works.”

Over the years, the pendulum has swung back and forth on this point. When they began this work, computer scientists thought the way to get computers to think was to have them mimic what the brain does. But this proved to be a very tough problem for the artificial intelligentsia for two reasons: One, they didn’t know how to build such a machine, and two, they didn’t know how the brain works in the first place.


A Different Concept

So the task shifted, and researchers said that computers did not have to function as brains do in order to carry out the same functions. After all, they said, cars don’t move the way people walk, and airplanes don’t fly the way birds do, so why should machines have to think the way people do? Humans have ways of accomplishing certain tasks, and machines have other ways of doing them.

But that hasn’t worked either. Machines, working purely as machines, have not been taught to think. So now researchers are back to trying to figure out how brains think, looking for a clue to how computers should do it.

In the end, this attention to the mind may turn out to be artificial intelligence’s major contribution to knowledge. It has spurred theories of the mind and brain. But hard as it is to figure out how the brain works, it may be a snap compared to getting machines to do all of the things that brains do effortlessly and without thinking about them.
