A New Breed of Computers: Mini-Supers at Cutting Edge of Technology

Times Staff Writer

The computer industry thrives on the knowledge that the more some people get, the more they want.

Computer makers have been delivering more with astounding regularity, producing ever-smaller computers that hold more information and work faster for less money.

From arcane government laboratories and university research centers, computers moved into businesses, elementary school rooms and even into middle-class homes. They run factories and control skyscrapers’ energy systems, turn out payroll checks, turn on dishwashers and even help build other computers.

From the seemingly insatiable quest for computing power, a new generation of computers was spawned--magnificently powerful supercomputers that can cost as much as $17 million and do millions of tasks per second.

Still, some people aren’t satisfied. They want to predict this afternoon’s weather, find oil faster, design safer cars and planes, learn why stars die and solve the mathematical riddles of an atom’s nucleus.

“The present computing power is woefully inadequate,” said Kenneth Wilson, a computer scientist who directs the Center for Theory and Simulation in Science and Engineering, Cornell University’s supercomputing center.

Wilson said that “to even get started” on problems involving the smallest particles of the universe, he would need a computer that does 40 billion operations per second.

Stretching the performance level to even a fraction of that speed and making the technology commercially affordable is the heady challenge facing makers of tomorrow’s computers.

Few can command the massive resources needed to make even one of the giant supercomputers and challenge the dominant company in that market, Cray Research.

Instead, they’re making smaller machines, called mini-supercomputers (or sometimes “Crayettes”) that fill the power gap between the 150 or so supercomputers in use today and the thousands of large mainframes at work in businesses throughout the world.

They are finding willing buyers among science and engineering companies that can’t spend $15 million for a Cray--or that have a supercomputer and want the mini-supercomputers to siphon off some of the workload. Most mini-supers cost about one-tenth the price of supercomputers but claim to deliver as much as 40% of the speed and power.

It’s a fledgling market so far, but researchers predict mini-supercomputer sales of $1 billion by 1990. Dozens of earnest new companies have sprouted in all parts of the country, particularly in regions eager to repeat the success of high-technology havens such as Northern California’s Silicon Valley and the Boston area’s Route 128.

From the woods of Oregon to the wide-open spaces of Texas and nestled in the rolling hills of Wisconsin, companies such as Floating Point Systems, Scientific Computer Systems and Convex have been working to establish their position in the mini-super market.

Some of the companies have been lucky enough to have their machines chosen for use at one of the half-dozen or so university supercomputing centers funded by the National Science Foundation.

Even so, the market is sure to get tougher, especially now that IBM has made its first entry, a souped-up version of the 3090 mainframe.

Expects Market Shakeout

“There will be a lot of demand for mini-supercomputers,” said Larry Smarr, director of the National Center for Supercomputing Applications, one of two supercomputer centers at the University of Illinois.

“But there are far more companies than needed to supply the demand, just like there were in the personal computer field. I expect most of these companies to die. The market shakeout phenomenon will not escape mini-supercomputers.”

Smarr believes that IBM “will own a good fraction of the market” for mini-supercomputers.

And a major determinant in the survival of these companies is the technology itself. Mini-supercomputers are being built around differing strategies, including new materials, new software languages and new computer designs, all attempting to push beyond the physical limits of today’s technologies. A winner will be slow in emerging, experts agree.

“The whole battle,” said Ron Gruner, president of Alliant Computer Systems, an Acton, Mass., maker of mini-supers, “is to take new technologies and exploit them as efficiently and quickly as possible.”

That the old ways are running out of steam is a given.

“It is getting tougher and tougher to make an individual computer faster and faster . . . by shrinking the size of the components,” Smarr said. Miniaturization was the key that helped scientists pack the power of a roomful of transistors onto a fingernail-sized piece of silicon and put computers on desks.

Putting the circuits, connectors and resistors closer together helps reduce the time it takes for the circuits to pass information among themselves.

But with the sizing-down process come other problems, such as coping with the intense heat given off by closely packed circuits and creating the sterile working environments and precision instruments needed to make the tiny components.

It’s a world, says Smarr, of “things smaller than a micron, where the human hair is a giant.”

Slackened the Pace

“We’ve had one really long ride with one idea--that is, making things smaller,” Smarr said. “We got used to every year or two making things two times as fast. And we can’t keep up that pace just by making things smaller. So we’ve slackened up the pace, and now we are looking for other ways.”

Using gallium arsenide instead of silicon as the base material for computer chips is one. Vitesse Electronics, a Camarillo-based company that hopes to introduce its first mini-supercomputer within 18 months, already is committed to using the new material in future models of its machines.

Other methods center on rearranging the way that computers approach problems. Most computers operate by breaking down problems into smaller units. In the kind of computers that IBM built in the 1950s and 1960s, for example, a single processor solved one problem at a time.

In that kind of computing, it would be as if one student were taking a long examination in which a teacher would give him many problems, one at a time. He would have to walk up to the teacher’s desk to get the first problem--say, an addition problem--walk back to his desk to solve it and then go back to the desk for the next problem--a multiplication task, perhaps.

On and on he would trudge back and forth between the teacher’s desk and his. Moving the desks closer together--as circuits are moved closer together on a chip--would reduce the time that it took for him to finish the exam.
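
For readers who want to see the analogy in code, here is a minimal sketch of that one-problem-at-a-time style. It is written in modern Python purely as an illustration; the numbers and names are invented for the example and are not drawn from any machine described in this article.

    # Illustrative sketch only: a scalar processor handles one value per trip,
    # much like the student fetching one problem at a time from the teacher's desk.
    problems = [2.0, 4.0, 6.0, 8.0]

    answers = []
    for x in problems:           # fetch one problem
        answers.append(x + 1.0)  # solve it, then go back for the next

    print(answers)  # [3.0, 5.0, 7.0, 9.0]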

Took Different Approach

In 1972, Seymour Cray left Control Data with an idea for a different approach and founded his own company, Cray Research, to make supercomputers. Cray’s breakthrough was building computers in which a single processor could handle many pieces of a problem at a time, a method called vector processing, which groups data into aggregates so that one operation can be applied to the whole set at once.

Applying the vector processing method to the example of the student taking the test, the teacher might give the student all of the addition problems to solve at once before the student would have to return to the teacher’s desk for a set of multiplication problems.
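
In the same spirit, here is a rough sketch of the vector style, again in Python, with the NumPy library standing in for vector hardware; that substitution is made only for illustration and is not a description of Cray’s machines.

    # Illustrative sketch only: vector processing lines up a batch of data and
    # applies one operation to all of it, like handing the student every
    # addition problem in a single trip.
    import numpy as np

    problems = np.array([2.0, 4.0, 6.0, 8.0])
    answers = problems + 1.0   # one vector operation covers every element
    print(answers)             # [3. 5. 7. 9.]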

Cray has sold 124 supercomputers and dominates a market that includes Control Data, Digital Equipment and three Japanese companies--Hitachi, Fujitsu and NEC.

But vector processing also has its limits. “A vector machine can only deal with very rigidly organized problems,” said David Kuck, director of the Center for Supercomputing Research and Development at the University of Illinois.

“In vector processing, you have to line up a lot of data in a row, so you can do the same operation on each piece. . . . You can’t make the machines go faster than an individual processor, and you get no more speed than you get out of (computing) in the traditional way.” Some problems can’t be easily translated to vector processing. In the example of the student taking a test, it might be that he needs the results of a multiplication before he can do an addition.
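
Here is a small, invented example of the kind of dependency Kuck describes, sketched in Python: each step needs the answer from the step before it, so the work cannot simply be lined up in a row and done all at once.

    # Illustrative sketch only: a chain of operations that resists vectorization
    # because each result depends on the previous one, like a student who must
    # finish a multiplication before he can do the next addition.
    values = [1.0, 2.0, 3.0, 4.0]

    running = 0.0
    results = []
    for v in values:
        running = running * 2.0 + v   # needs the prior iteration's answer
        results.append(running)

    print(results)  # [1.0, 4.0, 11.0, 26.0]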

The rigidity of vector processing “is forcing people to look at parallelism,” said Kuck (pronounced cook).

In parallel processing, many processors work on different parts of the same problem at once. It would be as if the long examination were broken up into parts and several students were given different multiplication tasks to solve simultaneously.
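
A minimal sketch of that division of labor, once more in Python and once more purely illustrative; the worker count, the sections and the function name are invented for the example.

    # Illustrative sketch only: parallel processing splits one big job across
    # several workers, like dividing the exam among several students.
    from multiprocessing import Pool

    def solve_section(section):
        # each worker handles its own section of the exam
        return [x * x for x in section]

    if __name__ == "__main__":
        exam = list(range(1, 9))
        sections = [exam[:4], exam[4:]]   # break the exam into parts
        with Pool(processes=2) as pool:
            answers = pool.map(solve_section, sections)
        print(answers)  # [[1, 4, 9, 16], [25, 36, 49, 64]]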

Uses Fewer Processors

Proponents say parallel processing allows a computer, in some cases, to work at many times the speed of supercomputers. It also enables some of the mini-supercomputers to have far fewer, or far simpler, processors than supercomputers and still work almost half as fast.

Computer companies believe that parallel processing eventually will allow speeds and precision necessary to solve complex problems endemic to the aerospace, petroleum and auto industries, among others.

Three-dimensional modeling, in which computers can simulate an object or geographical formation, is one kind of job that entails vast numbers of calculations.

A technique called 3-D seismic exploration, for example, can help oil companies locate new wells. And 3-D reservoir modeling can help the same companies determine how much oil remains--and where--in an existing field.

Already, engineers and scientists are using computers for work such as that, but the limitations of current computers--even ones as powerful as a Cray--often mean less-than-exact results.

With vastly improved performance levels, scientists and engineers will be able to figure in more and more possibilities or conditions. For example, there is an infinite variety of ways to crash automobiles, and hundreds of factors--road conditions, weather conditions, time of day, the speed of the car being crashed and other cars involved, the weight of the vehicle and driver, and so on--that contribute to the outcome.

With increased computing power, auto engineers can simulate more possibilities and then measure the effects on the passengers and the parts of the cars.

Huge Potential

Scientists say that supercomputer research into chemicals, biological systems and astronomy, among other areas, will help unlock the secrets to untold advances.

Many companies believe that parallel processing holds the key to such enhancements in power. At the University of Illinois, Smarr is less convinced of the immediate applications of parallel processing than his colleague Kuck, but he agrees that it is “a logical idea.”

“Anytime we have more work than an individual can do, we say let’s get a team to do it,” he said. But, he cautioned, not all problems lend themselves to parallel solutions. “If each (member of the team) does the same thing the exact same way, that’s not creative and only works for a very small number of problems whose natures are such that they can be divided into separate sections,” he said.

But plenty would disagree with him.

Gruner, a former Data General designer and one of Alliant’s three co-founders, is certain that “parallel processing in the scientific marketplace is going to be the generic technology.” Alliant’s machines combine the two kinds of processing.

Problems “that can be vectorized, we can run as vector operations. . . . Where we go beyond that is, the ones that don’t run on vector processing at high speeds, we run at parallel.”

Alliant, which now has sold 18 of its FX/ series computers, “is very intriguing,” said Jeffry Canin, a securities analyst who follows mini-supercomputer companies at Hambrecht & Quist in San Francisco, “because the system is truly a parallel processor but offers the ability to take existing applications in Fortran (language) and run them with little modification.”

Use Different Language

Some kinds of parallel computers, called massively parallel or highly parallel, do not have the ability to run programs written in Fortran, the predominant scientific language for computers.

One such machine was introduced earlier this month by Floating Point Systems, a Beaverton, Ore., company. The new massively parallel computers, which it calls the T Series, are a new direction for Floating Point, which in 1985 captured about 60% of all mini-supercomputer sales with its smaller machines, which speak Fortran.

The T Series, however, does not. The machines run on Occam, a scientific language developed in England. Many outside the company believe that this language difference will be a big barrier to the success of Floating Point’s T Series supercomputers.

In order to take advantage of the increased performance of the T Series, a computer user would have to rewrite programs already written in Fortran. For some extensively detailed problems, that could mean abandoning years of work.

“Floating Point Systems has a real challenge ahead of it to develop software,” said Hal Feeney, a supercomputer industry analyst at Dataquest. Many scientists are skeptical that the increased performance will be worth the investment of time.

Floating Point executives heartily disagree.

“For 10 times the performance, people will rewrite the software,” said Lloyd Turner, president and chief executive of Floating Point. “For 100 times, they’ll jump at the chance.” Turner said that for some specialized problems, the biggest of the T Series computers will work 260 times faster than the Cray.

Cornell’s Wilson, who uses some Floating Point mini-supers and was one of the first two customers for a new T Series, believes in the massively parallel machines.

Higher Capabilities

During the planning for the supercomputer center, he said, “we saw parallelism coming on quickly as a way toward enormous increases in performance and also be cost effective. Now, with Floating Point Systems’ (T Series), we’ve got the potential for higher capabilities than vector supercomputers.”

But Wilson said the Cornell center will be working to develop Fortran applications on the massively parallel system. “I’m sure,” he said, “there will be long and lively debates over whether Fortran, Occam or C (yet another language) will be the right language.”

Analyst Canin said parallel processing is “still in the exploratory stage. It will be three to five years before we see widespread acceptance of parallel processing. And any vendor who’s betting the near term, over the next year or two (on parallel processing), is in for a disappointment.”

Turner agrees, saying that “we’re not betting our company on this. Our main business is still” the mini-supercomputer line. Eventually, though, Floating Point expects the massively parallel machines to produce as much revenue for the company as its other computers do.

Several other companies are using the same kind of massively parallel architecture in supercomputer machines. A new parallel processor was introduced earlier this month by Intel Scientific Computers, a division of semiconductor maker Intel that is a neighbor of Floating Point in Beaverton. Intel said its new computers run at supercomputer speeds for mini-supercomputer prices.

The machine that Vitesse has under development will have multiple processing abilities, in which either vector or parallel processing can be used, depending on need and the design of the problem. It will perform 150 million operations per second and cost about $125,000.

Cray Uses Techniques

Cray, the supercomputer leader, also is incorporating parallel processing techniques in its machines. Like other multiprocessor machines, Cray’s allow all four of their processors to work in parallel or, for instance, two of the processors to function as vector processors while the other two work on parallel applications. Only one Cray supercomputer is currently being used as a parallel processor--at a weather forecasting center near London.

“To meet user demands for higher and higher levels of performance, the usable physics can’t get you there,” Cray spokesman Robert Gaertner said.

“Such as--the physics of electronic components . . . just how much density can you pack on a given chip? So you try to change the base materials (silicon). Those kinds of things perhaps give you a 50% or even 100% performance improvement. But the marketplace wants 10 times the performance right away and 100 times within five years.”

That, said Gaertner, means that “we have to work on parallel architectures.” And Cray is doing that, he said, but “is going down a different path” from companies like Floating Point Systems.

Cray will continue to dominate the supercomputer market, most analysts believe. The Japanese companies have had a difficult time establishing a foothold in the U.S. market, in part because of the acknowledged strategic importance of supercomputing and the reluctance of most buyers, chiefly government agencies and government-sponsored programs, to support Japanese efforts in the market.

Most mini-supercomputer companies admit that they won’t present much of a challenge to Cray. Smarr doesn’t think they want to, especially those who successfully sell other levels of computers.

“IBM wants Cray to stay where it is,” he said, “because Cray is the carrot that keeps IBM’s customers asking for more power.”
