Tomorrow’s Computer to Think, Act--Just Like the Brain It’s Based On

United Press International

Out of the primordial soup that biologists say produced the first living organisms rose the rudiments of a three-pound human organ that evolved into the source of passion, politics, mathematics and crime.

With the benefit of several million years of evolution, the human brain has become the model for a new kind of technology--neural network computer systems that think, learn, see, hear, forget, remember, sleep and dream.

These would be the ultimate electronic systems--machines that do not have to be programmed and are capable of computing functions comparable to those of a biological brain.

“We’re studying the brain to expand our definition of what a computer is,” said Michael Arbib, a neurobiologist and computer scientist at the University of Southern California. “Without a doubt, these will be the computers of the 21st Century.”

Able to ‘Learn’

Like their counterparts in nature, neural networks are endowed with a sophisticated system of “nerves” that can transmit messages. The network is able to “learn” any information needed to perform tasks and make decisions.
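
As a rough sketch of that learning idea, and not any particular laboratory’s system, the short Python program below shows a single artificial “neuron” learning a simple task from examples rather than from explicit instructions, nudging its connection strengths whenever it gives a wrong answer.

```python
# Toy illustration (not from the article): a single artificial "neuron"
# that learns a task from examples instead of being explicitly programmed.
# Here it learns the logical AND of two inputs with a perceptron-style
# error-correction rule: adjust the connection strengths after each mistake.

def step(x):
    return 1 if x > 0 else 0

# training examples: (pair of inputs, desired output)
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1  # how strongly each mistake changes the connections

for _ in range(20):                      # repeat the "lessons" several times
    for (a, b), target in examples:
        output = step(weights[0] * a + weights[1] * b + bias)
        error = target - output          # how wrong was the answer?
        weights[0] += rate * error * a   # strengthen or weaken each connection
        weights[1] += rate * error * b
        bias += rate * error

for (a, b), _ in examples:
    print((a, b), "->", step(weights[0] * a + weights[1] * b + bias))
```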

Scientists like Arbib, however, are quick to caution that neural network technology, which grew out of research into bionics in the 1960s, is nothing like computer science as it is known today.

In laboratories throughout the United States, scientists are designing systems with special functions--particularly an ability to learn--with the hope of one day merging that capability with sophisticated robotics.

Machines so equipped would be able to assume special work duties, primarily in and outside space stations and at nuclear reactor sites, performing tasks often too hazardous for people.

Hunting New Clues

“Many of the subtle processes of the mammalian brain are still unknown, so we are continuing to study the brain to acquire new clues about behavior and adaptation, and from that, perhaps, develop a better understanding of what can be done with computers,” Arbib said.

Scientists from such diverse fields as mathematics and physics are working on projects with neuroscientists and computer specialists to produce the first machines that will aid in the colonization of space stations and distant planets.

A milestone in the as yet primitive evolution of robot vision has been achieved at USC with a machine that is able to perceive “edges”--subtle differences in light intensity. Other research centers are developing machines that can recognize faces and voices.
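
In its simplest form, perceiving an edge amounts to noticing where light intensity jumps sharply between neighboring points of an image. The Python sketch below is a toy illustration of that principle, not the USC machine, applied to a single row of made-up pixel brightness values.

```python
# Toy illustration (not the USC system): an "edge" is a sudden difference
# in light intensity between neighboring pixels.

image_row = [10, 11, 10, 12, 11, 90, 92, 91, 90, 89]  # dark region, then bright

threshold = 30   # how big a jump counts as an edge
edges = []
for i in range(1, len(image_row)):
    difference = image_row[i] - image_row[i - 1]
    if abs(difference) > threshold:
        edges.append(i)

print("edge detected at pixel", edges)   # -> [5], the dark-to-bright boundary
```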

Because neural networks do not have to be programmed, scientists say they can be used alongside conventional computers, with the neural network deciding when and what to program into the conventional machine.

Cell-Like Microchips

One key to designing a computer that can operate on principles similar to those governing the brain and central nervous system is development of an internal network of microchips that function like nerve cells.

Researchers at Caltech and Bell Laboratories in New Jersey have developed such a microprocessor that serves as a component of a silicon nervous system. The network’s design is based on rules of nervous system organization found in higher vertebrates, all of which are capable of processing thousands of signals at once.

“At first glance you wouldn’t be able to tell this chip from any other,” said Caltech biologist and computer scientist John Hopfield, describing the silicon microprocessor. “This is not like a conventional chip which receives input from only two or three others.”

Instead, the chips, with a design encompassing Hopfield’s theory of “associative memory,” will eventually permit a neural network to process “input from different layers of the computer simultaneously.”

Different Process

“In neurobiology, each neuron gets input from literally thousands of others,” said Hopfield. “But computers as we know them now approach problems sequentially, using algorithms. Human problem-solving is completely different.”

An algorithm is a method of solving a problem one step at a time.

Hopfield said humans solve problems in a “heuristic” manner--by hit-and-miss and rule of thumb--weighing all known possibilities to arrive at the best answer, which is essentially what associative memory chips can do.
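
A heavily simplified sketch of that associative-memory idea, written in Python rather than silicon, appears below: a few patterns of on/off values are stored in a matrix of connection weights with a Hebbian-style rule, and the network then settles back to a complete stored pattern when given a corrupted cue. The patterns and network size are invented for illustration and are not taken from Hopfield’s chip.

```python
# Much-simplified sketch of associative memory (not Hopfield's chip):
# store patterns of +1/-1 values in a matrix of connection weights,
# then recover a whole pattern from a corrupted cue.

def store(patterns, n):
    # Hebbian-style rule: units that are active together get a positive link
    weights = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    weights[i][j] += p[i] * p[j] / n
    return weights

def recall(weights, cue, sweeps=5):
    # repeatedly let each unit take the weighted "vote" of all the others
    state = list(cue)
    n = len(state)
    for _ in range(sweeps):
        for i in range(n):
            total = sum(weights[i][j] * state[j] for j in range(n))
            state[i] = 1 if total >= 0 else -1
    return state

stored = [[ 1,  1, 1, -1, -1, -1,  1, -1],
          [-1, -1, 1,  1, -1,  1, -1,  1]]
W = store(stored, len(stored[0]))

noisy = [1, -1, 1, -1, -1, -1, 1, -1]    # first pattern with one unit flipped
print(recall(W, noisy) == stored[0])     # True: the net settles on the stored memory
```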

The difference between man and machine, however, is that a neural network would perform the task far faster and with greater accuracy. Scientists can achieve that capability by teaching the system in much the same way that a child is first taught to use language.

Recognizing Sounds

“At Bell Labs, David Tank has used neural networks to perform phoneme recognition by picking out a particular sound and not being confused by junk signals coming in at the same time,” said Hopfield.

A phoneme is a set of phonetically similar but subtly different utterances that speakers of a language hear as the same sound. In English, for example, the phoneme “d” covers the phonetically distinct sounds heard in the words “dad,” “mode” and “branded.”
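
A toy Python sketch of that kind of recognition, not the Bell Labs network itself, is shown below: each candidate sound is stored as a made-up “template” of feature values, and a noisy input is assigned to whichever template it matches most strongly.

```python
# Toy illustration (not the Bell Labs system): pick out a target sound by
# matching a noisy input against stored "templates" of known phonemes.
# The feature vectors here are invented stand-ins for acoustic signatures.

templates = {
    "d": [1.0, 0.2, 0.8, 0.1],
    "t": [0.9, 0.1, 0.1, 0.8],
    "b": [0.2, 0.9, 0.7, 0.3],
}

def classify(signal):
    # score each phoneme by the dot product of its template with the signal
    def score(template):
        return sum(t * s for t, s in zip(template, signal))
    return max(templates, key=lambda name: score(templates[name]))

clean = templates["d"]
junk = [0.2, -0.15, 0.1, 0.25]                 # interfering "junk" signal
noisy = [c + j for c, j in zip(clean, junk)]
print(classify(noisy))                         # -> "d", despite the noise
```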

Hopfield said the computer learns the different sounds of all letters until it is as capable as a native speaker of using and understanding words that incorporate even the subtlest sounds of any given phoneme.

At Johns Hopkins University, a similar neural network equipped with a voice synthesizer is being taught to speak and to read texts in English, repeating pronunciations until its understanding and pronunciation are perfect.

Never Equal the Brain

When a neural network makes a mistake, it is corrected until it learns the words correctly. But even with such precision, scientists say machines will never completely match the functioning of the human brain.

“The engineering of the human brain is absolutely staggering,” said neurobiologist Gary Lynch of the University of California, Irvine, who also is participating in neural network research.

“Nature has 400 million years on what science has been able to duplicate in a much smaller context in only 30 years of trying.”

He said that it would be virtually impossible to duplicate all of the functions of the cerebral cortex, “the most complicated entity in the universe.”

Using the Magic

Lynch noted that scientists can translate into silicon only a few of the complicated functions that the brain seems to perform effortlessly and cited Caltech experiments on a silicon eye and ear.

“Science is stealing tricks from the brain,” he said, pointing out human brains “do the magic they do from the cortex. We’re taking just a small piece of that magic and building it into silicon.”

The cerebral cortex is the outer layer of the brain and in humans is believed responsible for functions including language, reasoning, creativity and culture. Other, innermost parts of the brain control emotion and bodily systems.

Arbib said robots equipped with neural networks will not go through repetitive motions the way the “dumb robots” now used in industry do. Instead, they will diagnose problems and arrive at solutions.

Do It With Sensors

“They will have to have sensors so that they can perceive what the problem is and determine on the spot an appropriate plan of action.

“If robots are to play a sophisticated role and perhaps make a major contribution in space colonization, they will need senses of vision and touch to perceive relevant facts about their environment.”

Arbib is developing a theory of how a brain--mechanical or biological--perceives its environment and processes the information it needs to guide its action.

“I call this concept a theory of schemas (units of knowledge) in which each schema represents familiar information that helps the brain deal with the unfamiliar. The human brain makes sense of a new situation, say, driving an unfamiliar car, by drawing upon prior schemas,” he said.
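
The Python sketch below is a toy illustration of the schema idea only, not Arbib’s actual formalism: a few familiar situations are stored as sets of features, and an unfamiliar situation is handled by drawing on the most similar schema already known.

```python
# Toy illustration of "schemas" (not Arbib's formalism): stored units of
# familiar knowledge help make sense of an unfamiliar situation.

schemas = {
    "drive family sedan": {"steering wheel", "gear shift", "pedals", "ignition key"},
    "ride bicycle":       {"handlebars", "pedals", "chain"},
    "operate typewriter": {"keys", "paper", "ribbon"},
}

def closest_schema(observed_features):
    # choose the stored schema that shares the most features with the situation
    return max(schemas, key=lambda name: len(schemas[name] & observed_features))

# an unfamiliar rental car: not identical to anything stored, but close enough
rental_car = {"steering wheel", "pedals", "ignition key", "push-button gear selector"}
print(closest_schema(rental_car))   # -> "drive family sedan"
```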

Useful in Home

Tony Materna, an engineer and marketing specialist for the Hecht-Nielsen Corp. in San Diego, one of a growing number of firms attempting to build neural computers, said one goal is to eventually build such machines for use in businesses and homes.

“We think that by the end of this century there will be one of these computers in every place where a conventional computer is now used, or at least working alongside a conventional computer,” Materna said. “It’s possible that neural computers will decide what to program into a conventional computer.

“In business, in the near future, you’ll simply have to speak into a word processing unit and it will write letters and reports based on what is spoken into the machine.”

Materna predicts that by 1995, for the price of a mid-size car, people will be able to have permanent live-in maid and car-repair services with the purchase of a “household unit.” Such a unit, or android, would learn to do dishes, laundry and vacuuming, as well as to wash or repair the car.

And what if more help is needed around the house?

Why, the android would be capable of building a clone of itself, said Materna, ready to paint the house, move furniture or repair an ailing television set.

The only trick to gaining the service would be teaching the machine to perform the job.
