
Book Review: Putting on Thinking Caps Over Artificial Intelligence


Silicon Dreams: Information, Man and Machine by Robert W. Lucky (St. Martin’s Press: $19.95; 411 pages)

The truth about artificial intelligence is that if you severely and artificially restrict a problem, you can get computers to do interesting things that look like thought. But if you allow a large area of discourse--the real life situation--computers don’t know what you’re talking about.

For example, computers can “understand” a limited vocabulary spoken by speakers they have heard before. But they can’t make heads or tails of continuous speech on any subject by any speaker--something that you and I do all the time without thinking about it.


To be sure, computers have extraordinary skills of their own. They can rattle off the Los Angeles telephone directory time after time without a mistake, which is far beyond the memory skill of any human. But they cannot distinguish one face from another, which even a baby can do. Or a dog.

It comes down to this: Computers are very good at rote tasks that can be clearly defined and specified. They are not good at anything requiring judgment.

Permanent Limitation?

Whether this is a permanent or a temporary limitation is a matter of dispute and, like most disputes, a matter of the temperament of the disputants. Optimists argue that in time--perhaps a long time--computer “thought” will be indistinguishable from human thought. Computers are getting faster and more powerful all the time. Sooner or later the goal will be achieved.

On the other hand, realists argue that the “brute force” approach to mimicking thought can never be completely successful. You can’t reach the moon by learning to climb trees.

This subject has been brooded about at length in recent years in popular and technical literature. I had thought there wasn’t much more to say about it until I read “Silicon Dreams” by Robert W. Lucky, a book that links artificial intelligence to information theory and in the bargain elucidates both.

Lucky, who is executive director of research at AT&T Bell Laboratories, has written a brilliant exposition of information science, full of fascinating details about machines and people and the different ways they know things. He also provides the theoretical underpinning for this work and the broad philosophical conclusions that can be drawn from it. The book embraces both the forest and the trees--and then some.


Work of Genius

Information theory was created in 1948 by Claude Shannon in a paper entitled “A Mathematical Theory of Communication.” Lucky says flatly of Shannon’s paper: “I know of no greater work of genius in the annals of technological thought.” He goes on to explain why this is so and to show how information theory applies to the understanding of text and speech and pictures by computers as well as by people.
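Shannon’s central measure, entropy, quantifies the average information a source delivers per symbol. For the curious, here is a minimal sketch in Python (my own illustration, not drawn from the book; the toy messages are made up) computing it from symbol frequencies:

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy: -sum(p * log2(p)), the average bits per symbol."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A monotonous message carries little information per symbol...
print(entropy_bits_per_symbol("aaaaaaab"))  # about 0.54 bits/symbol
# ...while a varied one carries more.
print(entropy_bits_per_symbol("abcdefgh"))  # 3.0 bits/symbol
```

The numbers match intuition: the less predictable the next symbol, the more information it conveys.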

On a broader level, Lucky writes about information itself and about the difficulties of dealing with the massive amounts of data and knowledge that the world is churning out. “The real task of the information age is the selection of which information we take in,” he writes.

On a still broader level, he writes about the inherent problem of knowledge: How do we know what we know? Is the world knowable, or does it just look that way? Are randomness and chance the basic explanatory principles of the universe?

“There are two reasons why a probabilistic model of the world is often used in physical sciences,” Lucky says. “First, it acknowledges our ignorance of the true origins of the quantities involved and the minutiae of their interrelationships. In effect, we say that we do not understand all of these details, so we pretend that the outcomes that we see--for example, the symbols that designate messages--are random.

‘A Powerful Tool’

“The second reason, however, is even more important. Probability theory is a powerful tool for analysis. Often when we are unable to calculate even one particular special case of known events, we find that we are able to calculate the average over many unknown, but random, events. So it was in information theory. Shannon’s remarkable results are due in large part to the assumption of a random model of information generation.”
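Lucky’s point about averages can be seen in a toy simulation (again my own illustration, with a hypothetical three-symbol source): no single draw from a random source can be predicted, yet the average cost of encoding its output settles on a value the theory computes exactly.

```python
import random
from math import log2

random.seed(0)
probs = {"a": 0.5, "b": 0.25, "c": 0.25}  # a made-up random source

# Ideal code lengths: -log2(p) bits per symbol (a: 1 bit; b, c: 2 bits).
code_len = {s: -log2(p) for s, p in probs.items()}

# No individual draw is predictable, but the average over many is.
draws = random.choices(list(probs), weights=list(probs.values()), k=100_000)
avg = sum(code_len[s] for s in draws) / len(draws)
print(avg)  # hovers near 1.5, the source's entropy in bits per symbol
```

That stability of averages over unpredictable events is what lets the theory prove hard results about sources no one can predict in detail.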

Lucky notes that in the early years of information theory, there was widespread belief among its students that it would solve all problems. This initial euphoria was followed by a more sober view, which in turn gave way to a “so what?” attitude, namely, how does this help us do anything? It’s an interesting way of looking at the world, but what benefit does it offer?


(One might ask the same question of chaos theory today. It’s a cute idea, but so what?)

The answer, at least for information theory, is that while it doesn’t solve the problems, it does make them clearer. Information theory is an elegant mathematical and philosophical tool that synthesizes many different ideas and provides a way to think about them. Alas, solutions are harder to come by.

That’s the good news. The bad news is that “Silicon Dreams” is not an easy book. Parts of it are somewhat technical and might profitably have been placed in an appendix without harming the text. Even the non-technical parts require your undivided attention. Having read it once, I wouldn’t want to be tested on everything that’s in there.

“Silicon Dreams” is not a layman’s guide to information theory. But it comes close.
