Making Computers Work More Like Us: Patterned After the Brain, Neural Networks Show Promise

Associated Press

In the beginning, there were digital computers. They were big, slow and very stupid. Half a century later, digital computers are very small and fast, but they’re still hopeless idiots compared to humans.

The main problem was, and still is, that computers just can’t think.

But what if computer circuits could be made to resemble our brains? What if they could learn from their mistakes?

Hundreds of tiny companies have sprung up in the past three years hoping to answer, and cash in on, those questions with an approach called neural networks--named for the neurons that are the basic building blocks of the human nervous system.

Carlos Tapang, a 36-year-old Filipino physicist who left Intel Corp. to start Syntonic Systems Inc., shipped what he believes is the first commercially available neural network chip to Electrodyne, a Japanese company, in May.

Combines Signals

The chip is called DENDROS-1, a reference to dendrites, the highly branched filaments of nerve or brain cells that let the cells communicate. It simulates neurons by combining a variety of signals to come up with a single result.

Tapang compares it to measuring the flow of water through a vast network of different-sized pipes by emptying them into a single pool. Such a system of computing values is called “analog,” which implies a continuous form of measurement, like the sweep second hand on a watch.
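
In modern programming terms, that pooling of many weighted signals into one result can be sketched roughly as follows. The weights, threshold and code are illustrative assumptions for a generic artificial neuron, not Syntonic's actual circuitry:

```python
# A minimal digital sketch of the "many signals, one result" idea.
# Each incoming signal is scaled by a connection strength (a "pipe size"),
# everything drains into one pool, and the neuron fires if the pool is full enough.

def neuron_output(inputs, weights, threshold=1.0):
    """Combine many weighted input signals into a single on/off result."""
    pooled = sum(x * w for x, w in zip(inputs, weights))
    return 1 if pooled >= threshold else 0

# Example: three input signals with different connection strengths.
print(neuron_output([0.9, 0.2, 0.7], [0.5, 0.8, 0.6]))  # -> 1 (the pool overflows the threshold)
```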

Digital computers, like digital watches, chop time and numbers into tiny bits and add them up one at a time. In fact, “bits” are the smallest unit of information in the binary system that is the “brain” of a digital computer.

The great advantage of today’s digital computers is that they can add those bits incredibly fast, even if it is done only one bit at a time.
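
That bit-at-a-time arithmetic can be sketched, loosely, as a ripple-carry addition. The Python below is a generic illustration of the idea, not any particular machine's circuitry:

```python
# A rough sketch of how a digital machine adds numbers "one bit at a time":
# work from the lowest bit upward, carrying a 1 whenever two bits overflow,
# much as a ripple-carry adder circuit would.

def add_bit_by_bit(a, b):
    result, carry, position = 0, 0, 0
    while a or b or carry:
        bit_a, bit_b = a & 1, b & 1          # lowest remaining bit of each number
        total = bit_a + bit_b + carry
        result |= (total & 1) << position    # write this bit of the answer
        carry = total >> 1                   # pass any overflow to the next bit
        a, b = a >> 1, b >> 1
        position += 1
    return result

print(add_bit_by_bit(13, 29))  # -> 42
```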

There is some evidence from research on animals and people that the neural cells in our brains use a kind of digital system to transmit signals. These on-off pulses, called spikes, confused early researchers and led some to believe that the brain relied on a digital model to process information.

Instead, researchers found that the spikes were only a small part of a complex electrochemical system that channels signals in the way Tapang is trying to imitate. “A neuron by itself is dumb, but the totality of that network of neurons in our skull is probably the most intelligent machine in the universe,” he said.

The key word is “network.” It is the interaction of neurons in our brains that gives rise to thought, not the action of a single neuron.

In that sense, Tapang said, digital computers are doomed to be electronic dunces because their chips were designed to be solitary devices called central processing units, channeling all operations through one electronic “pipe.”

Even if CPUs are connected, the result is only a faster dunce.

However, some computer scientists and industry analysts are skeptical.

Neural networks will “be a fine addition to what we’ve got now but they’ll just supplement it,” said Esther Dyson, editor and publisher of Release 1.0, a New York-based computer newsletter. “They’ll never replace the mathematically precise logic of a digital system.

“I fundamentally disagree with this ‘bated breath’ attitude in the industry about the next breakthrough being neural chips or networks. They’re good at things like pattern recognition but still cannot cope with our kind of ‘fuzzy thinking.’ A lawyer might ask, ‘Colonel North, why did you do that?’ and accept a fuzzy answer, but a computer can’t.”

One San Jose-based company, Synaptics, has used neural network technology to develop what it calls a Silicon Retina. An array of photo sensors emulates the light receptors in the eye, and an analog computer processes the image for display on a video monitor.

Tapang uses capacitors to simulate neurons, which rely on chemicals to transmit signals across synapses. Capacitors store and release electricity in much the same way. DENDROS-1 has one fixed connection and 22 variable ones that simulate synapses. It can be layered with other chips to create an overlapping network of communicating capacitors that “fire” signals to each other.
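
A crude software analogy for that charge-and-fire behavior might look like the sketch below. Only the count of connections comes from the chip itself; the leak rate, threshold and weight values are assumptions made for illustration:

```python
# A toy simulation of a capacitor-like neuron with 1 fixed and 22 variable synapses.
# The leak rate, firing threshold and weights are illustrative assumptions only.

import random

LEAK = 0.9          # fraction of stored "charge" retained each time step (assumed)
THRESHOLD = 5.0     # charge level at which the simulated neuron fires (assumed)

fixed_weight = 1.0
variable_weights = [random.uniform(0.0, 0.5) for _ in range(22)]  # 22 adjustable synapses

charge = 0.0
for step in range(20):
    inputs = [random.choice([0, 1]) for _ in range(23)]           # incoming spikes
    weights = [fixed_weight] + variable_weights
    charge = charge * LEAK + sum(s * w for s, w in zip(inputs, weights))
    if charge >= THRESHOLD:
        print(f"step {step}: fire")
        charge = 0.0    # the capacitor dumps its charge when the neuron fires
```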

Best Applications

As in the brain, the firing of one electronic neuron affects the others. An important feature is each neuron’s ability to damp its neighbors, creating a pattern. The variable interaction is a crucial part of the network.
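
That damping of neighbors, often called lateral inhibition, can be illustrated with a toy example. The inhibition strength and input values below are invented for the sketch:

```python
# Neighbors damping one another sharpens a blurry input into a crisper pattern.

INHIBIT = 0.4  # how strongly each unit suppresses its immediate neighbors (assumed)

activity = [0.2, 0.5, 0.9, 0.5, 0.2]   # a smooth bump of input activity

sharpened = []
for i, a in enumerate(activity):
    left = activity[i - 1] if i > 0 else 0.0
    right = activity[i + 1] if i < len(activity) - 1 else 0.0
    sharpened.append(max(0.0, a - INHIBIT * (left + right)))

print(sharpened)  # the peak survives, the flanks are damped toward zero
```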

The applications for Tapang’s chip include pattern recognition.

Dyson agrees that the best uses for neural chips and networks would be tasks like analyzing seismic patterns to predict earthquakes and to hunt for oil or gas, analyzing brain waves, identifying fingerprints or other classification chores.

But she remains skeptical about their potential. So does Dave Waltz, of Thinking Machines Corp. in Cambridge, Mass., which makes a supercomputer built of up to 64,000 digital processors harnessed together.

The problem with neural networks is that you can’t keep expanding them, Waltz said. “The whole field of neuropsychology says that increasing the scale means greater complexity. If you scale up the size of a neural net you increase the complexity of solving the problems you give it.”
