Philosophers have long been fascinated by the workings of the mind. Cognition, perception, memory, and understanding have been fertile ground for conjecture and analysis since the time of Aristotle. Scientists have approached the workings of the brain from several perspectives. Neurobiologists attempt to understand the workings of individual nerve cells, called neurons, and their organization into larger structures in the brain. Psychologists and cognitive scientists quantify aspects of behavior, perception, and memory, elucidating general principles of organization and interrelationships.
Finally, computer scientists, especially those engaged in artificial intelligence, attempt to construct computer programs that do some of the tasks at which humans excel. Practitioners of all these disciplines are represented in “Neurocomputing,” a collection of original scientific papers concerned with computational aspects of neural systems.
The phrase “neural system” refers to any system whose components either are biological neurons or resemble biological neurons in some way. The papers in “Neurocomputing” treat neural systems as diverse as a silicon computer chip that mimics a retina, an abstract mathematical theory of disordered crystals, a computer program that can learn to pronounce English words, the visual system of a horseshoe crab, and the human brain. Generally, the neurons in an artificial neural system resemble real neurons in two senses: they respond in a regular and simple way to stimuli, and they interact so that the activity of one influences that of many others.
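A minimal sketch (my own illustration, not taken from the book) of this abstract neuron: a unit that forms a weighted sum of the activities of other units and responds with a simple threshold rule. The weights, inputs, and function name are all made up for the example.

```python
# A minimal sketch of the abstract "neuron" used in most artificial
# neural systems: it sums weighted inputs from other units and fires
# if the sum exceeds a threshold. All numbers here are illustrative.

def neuron_output(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# Three upstream units influence this one; the middle connection is inhibitory.
print(neuron_output([1, 0, 1], [0.5, -0.3, 0.8], threshold=1.0))  # → 1
```

Networks of such units get their power not from any single neuron, which is trivially simple, but from the pattern of connection weights among many of them.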
Biological neural systems display some remarkable properties. In a conventional computer, if one transistor or microchip fails, the entire system, or at least a significant subsystem, is inoperable until the component is repaired. By contrast, in the human brain thousands of neurons die every day, yet we continue to function at more or less the same level of competence. Fault tolerance is one of the features of neural systems that has generated a great deal of excitement. Computers constructed from highly interconnected electronic analogs of neurons show some promise of graceful degradation, rather than catastrophic failure, in the event that a few components fail.
The ability to deal with incomplete, noisy, incorrect, or conflicting inputs is another aspect of neural systems that is cause for enthusiasm. Our own abilities to follow a conversation in a noisy room or to read smudged, irregular handwriting are well-known aspects of human perception. It is hoped that artificial neural systems modeled after our own auditory and visual systems might also capture some of our extraordinary abilities in this area.
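Both of these hopes, graceful degradation and tolerance of noisy input, can be illustrated with a Hopfield-style associative memory, one kind of model the collection covers. The sketch below is my own, with made-up patterns: a network stores two patterns in its connection weights, then recovers one of them from a corrupted version.

```python
# A minimal pure-Python sketch of a Hopfield-style associative memory,
# illustrating recovery of a stored pattern from a noisy version of it.
# The patterns are made up for illustration; units take values +1/-1.

def train(patterns):
    """Hebbian outer-product rule: strengthen connections between units
    that are active together in a stored pattern."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    """Repeatedly update every unit by thresholding its weighted input;
    the state settles toward the nearest stored pattern."""
    for _ in range(steps):
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in W]
    return state

stored = [[1, 1, 1, -1, -1, -1],
          [1, -1, 1, -1, 1, -1]]
W = train(stored)

# First stored pattern with one bit corrupted:
noisy = [1, 1, -1, -1, -1, -1]
print(recall(W, noisy))  # → [1, 1, 1, -1, -1, -1]
```

Because the memory is distributed across all the weights rather than stored at one address, deleting a few connections, like corrupting a few input bits, tends to degrade recall gradually rather than destroy it.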
Neural systems are not without their drawbacks, however. Humans have a tendency to forget infrequently used facts and to confuse similar concepts; we are very poor at precise logical and arithmetic reasoning, tasks that conventional computers perform superbly. Artificial neural systems display many of the same types of undesirable behavior. Indeed, it is occasionally cited as a “success” when an artificial neural system makes mistakes reminiscent of those of human children, for example, in the acquisition of language. Such mistakes suggest that the artificial system has truly captured important aspects of the way humans perform the same task, for better or for worse.
Another obstacle to be overcome is the difficulty of programming neural systems. In fact, concepts like “programming” and “software,” which were invented to help manage conventional computers, may not be the right ones to apply to the design of neurocomputers at all. Unfortunately, viable alternatives have yet to emerge. Conventional computers today benefit from almost 40 years of development in both hardware and software. It is unreasonable to expect that the neural system under development in a research laboratory in 1988 should be as easy to direct as a personal computer. We should keep in mind that major breakthroughs were necessary to bring conventional computers to the level of programmability to which we are accustomed today. Entirely different, but equally profound, advances in the mechanisms for designing and controlling neural systems must be made before their full potential can be realized.
The contribution of “Neurocomputing” is two-fold. First, it collects in one place most of the seminal articles that have given rise to a growing branch of science and engineering.
This is an invaluable aid to the researcher and the highly motivated layman, especially since the field is highly interdisciplinary and the collection would require considerable legwork to assemble on one’s own. Second, it shows us where we are and how far we have yet to go in understanding and using neural systems.
Neural systems have been the subject of considerable hyperbole in recent years. Claims made for sixth-generation computers able to think and act like humans rely on highly optimistic extrapolations of current technology. As the papers in “Neurocomputing,” which cover the state of the art up to 1987, make abundantly clear, there is much to be excited about, but the field is still in its infancy. Fundamental questions have yet to be asked, much less answered.