
Supercomputers May Go the Way of the Dinosaur: Technology: Massively parallel processing is driving the behemoths toward extinction and putting high-performance computing within reach of smaller companies.

TIMES STAFF WRITER

Even in the arcane and often baffling kingdom of information processing, the supercomputer is an exotic species. The most powerful of them, built by Minneapolis-based Cray Research, carry out so many computations so quickly that they need a liquid cooling system to prevent a meltdown.

And they cost millions of dollars, putting them out of reach of all but the largest corporations and government agencies.

But a new technology is slowly driving these particular behemoths toward extinction. Massively parallel processing, as it is known, replaces the small cluster of very fast but power-hungry chips at the heart of a conventional supercomputer with hundreds or even thousands of personal computer-type microprocessors that simultaneously attack different pieces of a big problem.
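In rough outline, the division of labor looks like the short sketch below, written in Python as a modern stand-in; the worker count and the toy arithmetic are invented for illustration and are not drawn from any particular machine.

```python
from multiprocessing import Pool

def attack_piece(piece):
    # Each processor attacks its own piece of the big problem.
    return sum(x * x for x in piece)

if __name__ == "__main__":
    problem = list(range(1_000_000))   # one big problem
    n_workers = 8                      # stands in for hundreds of simple chips
    step = len(problem) // n_workers
    pieces = [problem[i:i + step] for i in range(0, len(problem), step)]

    with Pool(n_workers) as pool:
        answers = pool.map(attack_piece, pieces)  # all pieces, simultaneously

    print(sum(answers))                # reassemble into a coherent solution
```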


Machines that use this technique--while only in limited use so far relative to conventional supercomputers--recently gained the top spots in the rankings of the world’s fastest computers. In the long run, massively parallel machines promise not only to break more speed records, but also to dramatically lower the cost of high-performance computing, bringing the machines out of well-funded laboratories and into the hands of a broad range of commercial customers.

Indeed, the massively parallel concept can even eliminate the need for a separate supercomputer entirely in certain situations. A network of desktop workstations could be programmed to act as a supercomputer, with each of the separate machines functioning as a processor and solving a particular part of the big problem.

“I don’t think anyone disputes that massively parallel will be increasingly important,” said Jeffry Canin, an analyst at Montgomery Securities, summing up the consensus at the Supercomputing USA conference here recently. “The dispute is about the timetable.”

Until very recently, many supercomputing experts believed that massively parallel processing was nice for experiments but was still many years away from practical application. Even today, the types of problems that have traditionally required supercomputers--geological analysis for oil exploration, aircraft and auto design, and intelligence analysis, to name a few--are still being carried out on Cray supercomputers or other “conventional” machines.

And many tasks that once required a Cray machine are now being handled by low-cost “mini-supercomputers” from companies such as Convex Computer Corp. of Richardson, Tex., or even by $100,000 engineering workstations from Silicon Graphics in Mountain View, Calif. These systems are expected to grow increasingly popular for many applications even though they can’t match massively parallel machines in raw speed.

Cray, for its part, agrees that massively parallel processing is an important trend, and intends to be part of it. But Cray officials say the trend will move slowly and there will still be a large market for conventional supercomputers.


The obstacle for massively parallel processing, quite simply, is software. It’s very hard to write programs that can efficiently divide big computing problems into bite-size chunks, deliver the information to hundreds or thousands of different processors and then retrieve the separate answers and assemble them into a coherent solution.
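A running total illustrates the difficulty: every piece of the answer depends on the pieces before it, so the coordinating program must make an extra pass before the processors can work independently. The hypothetical Python sketch below shows that extra bookkeeping; the data and chunk size are invented.

```python
from multiprocessing import Pool

def chunk_total(chunk):
    # Phase 1: each processor totals only its own chunk.
    return sum(chunk)

def chunk_scan(args):
    offset, chunk = args
    # Phase 2: each processor builds its running totals, starting
    # from the combined total of every chunk that precedes it.
    out, running = [], offset
    for x in chunk:
        running += x
        out.append(running)
    return out

if __name__ == "__main__":
    data = list(range(16))
    chunks = [data[i:i + 4] for i in range(0, len(data), 4)]

    with Pool(4) as pool:
        totals = pool.map(chunk_total, chunks)
        # The hard part: the chunks are not independent, so the
        # coordinator must compute each chunk's starting offset.
        offsets = [sum(totals[:i]) for i in range(len(totals))]
        results = pool.map(chunk_scan, list(zip(offsets, chunks)))

    print([x for part in results for x in part])  # one coherent answer
```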

In the past two years, however, tremendous progress has been made in creating the “tools” needed to program massively parallel machines. And most experts believe that there will soon be enough ready-to-use programs to make these systems a practical choice for some commercial applications.

“For 15 years, we’ve had nothing but hardware,” said Jeffrey C. Kalb, founder and president of Maspar Computer Corp., a Sunnyvale, Calif.-based massively parallel computer company that recently received an equity investment from Digital Equipment Corp. and will supply systems to that company. “But in the last few years, we’ve learned to program these machines. Now we’ll see a massive explosion in the use of them.”

Parallel processing is making its first commercial inroads in the area of databases. Kalb says Maspar’s machines can sort through 8 million documents in a second, searching not only for a single word or phrase but even for more complex strings of data. Dow Jones & Co. uses computers from Thinking Machines Corp. of Cambridge, Mass., regarded as the leader in massively parallel processing, for its News Retrieval information service.
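Document searching maps naturally onto the architecture because each document can be checked independently: every processor scans its own share of the collection, and the matches are merged at the end. The simplified Python sketch below conveys the idea; the four sample documents and the plain substring match are invented, and commercial retrieval systems are far more sophisticated.

```python
from multiprocessing import Pool

DOCUMENTS = [
    "cray research ships a new supercomputer",
    "oil exploration relies on geological analysis",
    "parallel machines search documents quickly",
    "a news retrieval service indexes stories",
]  # a real collection would hold millions of documents

def search_share(args):
    phrase, docs = args
    # Each processor scans only its own share of the collection.
    return [d for d in docs if phrase in d]

if __name__ == "__main__":
    phrase = "supercomputer"
    n_workers = 2
    step = (len(DOCUMENTS) + n_workers - 1) // n_workers
    shares = [DOCUMENTS[i:i + step] for i in range(0, len(DOCUMENTS), step)]

    with Pool(n_workers) as pool:
        hits = pool.map(search_share, [(phrase, s) for s in shares])

    for doc in (d for share_hits in hits for d in share_hits):
        print(doc)
```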

Over the long term, massively parallel designs promise to bring supercomputer power within range of the average business. Because they rely on relatively simple, low-cost microprocessors, massively parallel machines cost hundreds of thousands rather than millions of dollars, and prices are expected to drop rapidly as volume increases.

This will eventually give a structural engineer or a furniture designer access to the same computing tools that an aircraft designer has today. A small plastics company could figure out the optimal way to injection-mold a certain product. A truck-mounted computer could instantly solve oil field problems that now have to be analyzed for days in a laboratory.


And extraordinary computer power could become even more widely available if the concept of “network” supercomputing catches on. A Pasadena company called Parasoft is one of several firms working on software that would enable a network of standard engineering workstations--such as those built by Sun Microsystems--to function as a single parallel processing computer.

That could enable a company that already owns workstations to avoid buying a supercomputer altogether. Instead, supercomputing problems could be loaded onto the network, with each workstation solving a piece of the problem.
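In outline, the coordinating software might behave like the hypothetical sketch below, in which local processes stand in for workstations scattered around an office; a product such as Parasoft’s would ship the pieces over the network instead, and the piece sizes and toy computation here are invented.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def solve_piece(piece_id, values):
    # On a real network, this would run on an idle workstation
    # down the hall rather than in a local process.
    return piece_id, sum(v * v for v in values)

if __name__ == "__main__":
    problem = list(range(100_000))
    pieces = [problem[i:i + 10_000] for i in range(0, len(problem), 10_000)]

    answers = {}
    # Four processes stand in for four engineers' desktop workstations.
    with ProcessPoolExecutor(max_workers=4) as network:
        jobs = [network.submit(solve_piece, i, p) for i, p in enumerate(pieces)]
        for job in as_completed(jobs):      # answers arrive in whatever
            piece_id, value = job.result()  # order the machines finish them
            answers[piece_id] = value

    print(sum(answers.values()))
```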

It’s not yet clear how popular network supercomputing will become.

“It’s at a pretty early stage,” said Dave Tolle, head of parallel computing research at Shell Development Co. in Houston. He noted that communication among the different workstations would be an insurmountable bottleneck for some applications, and that engineers wouldn’t generally appreciate having their personal workstations usurped for a supercomputing problem.

“People who have workstations want to use them,” said Gary P. Smaby, a Minneapolis-based supercomputer consultant. “It’s like asking them to trade a direct telephone line for a party line.”
