
Science / Medicine : Getting the Picture : The computer graphics techniques that produced dazzling special effects in ‘Terminator 2’ are also valuable research tools. They enable scientists and engineers to put complex data into a form they can visualize.

<i>Baer is a free-lance technology writer in New York City.</i>

Audiences crowding theaters this summer for the blockbuster film “Terminator 2” have been emerging in awe, not just of Arnold Schwarzenegger’s biceps, but of special effects feats such as the T-1000 Cyborg’s liquid emergence from a hospital floor and smooth transition into human form.

But some engineers and scientists in the audience no doubt felt kinship with the cinematic special effects teams that have been the darlings of filmdom since the “Star Wars” days. These scientists are part of an emerging group who have adopted the same computer graphics techniques, not to dazzle audiences, but to get a much better look at the results of their own experiments and what those results mean.

The technique, known in the research world as “visualization,” has lent new meaning to the old adages that seeing is believing and that a picture is worth a thousand words.


Visualization is simple, at least in concept: Computer operators take numerical data and convert it into images with commands. The basic technique is not new--filmgoers who traveled the time tunnel in the movie “2001” saw the computer transform colored lines into time machine arcs.

But the refinements in recent years have been dramatic. In “Terminator 2,” for instance, computer animators took the technology a major step forward. They scanned drawings and photographs using devices that converted the pictures into the digital language of zeroes and ones understood by computers. Then, using computer commands, they distorted the images until the raw picture mutated into the T-1000 Cyborg. Finally they electronically superimposed the computerized sequence onto a normally photographed scene.
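The heart of the image-distortion step the article describes is interpolating between two digitized pictures. A cross-dissolve is the simplest member of that family of techniques (full film-style morphing also warps pixel positions); this minimal Python sketch, using tiny hypothetical grayscale images rather than any actual production data, shows the idea:

```python
def cross_dissolve(img_a, img_b, t):
    """Blend two same-sized grayscale images (lists of pixel rows).

    t=0 returns img_a, t=1 returns img_b; intermediate t values give
    the in-between frames of the simplest image-to-image transition.
    (Full morphing, as used in the film, also warps pixel positions,
    not just intensities.)
    """
    return [[round((1 - t) * a + t * b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Two tiny 2x2 "images" with 8-bit pixel values (0-255):
a = [[0, 0], [0, 0]]          # all black
b = [[255, 255], [255, 255]]  # all white
halfway = cross_dissolve(a, b, 0.5)   # every pixel blends to mid-gray
```

Generating the whole transition is then just a loop over `t` from 0 to 1, producing one frame per step.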

What is also new is that cheaper, more powerful desktop computers are taking this technology from Hollywood into the laboratory, allowing scientists and engineers to better understand the results of their experiments, according to Harold F. Enright, president of Santa Barbara-based Wavefront Technologies, which produces visualization software. Such software converts digital data, representing numerical results of scientific experiments or simulations, into pictures. For instance, objects under increasing gravitational force might gradually turn from blue to red.
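The blue-to-red shading the article mentions is a scalar-to-color mapping: each data value is normalized and assigned a color along a gradient. This short Python sketch illustrates the general technique with hypothetical force values; it is not Wavefront's actual software:

```python
def scalar_to_rgb(value, lo, hi):
    """Map a scalar in [lo, hi] to a color between blue and red.

    Low values come out pure blue, high values pure red; the green
    channel stays at zero, so the gradient passes through purple.
    """
    t = (value - lo) / (hi - lo)        # normalize to [0, 1]
    t = max(0.0, min(1.0, t))           # clamp out-of-range data
    return (round(255 * t), 0, round(255 * (1 - t)))  # (R, G, B)

# Example: shade objects by gravitational force, as in the article.
forces = [0.0, 5.0, 10.0]               # hypothetical force values
colors = [scalar_to_rgb(f, 0.0, 10.0) for f in forces]
```

Applying the same function to every node of a simulation turns a table of numbers into a colored image in one pass.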

These technologies are now enabling fluid dynamics researchers in the aerospace industry to watch graphic simulations of air flowing over a wing design and see what effects an increase in airspeed or takeoff angle or a change in the shape of the wing itself will cause. Automotive engineers generate similar presentations using visualization to test the aerodynamics of vehicles still on the drawing board.

If the data were represented instead by pages of figures, the results would take much longer to interpret, says Val Watson, an expert in fluid dynamics at NASA’s Ames Research Center in Mountain View, Calif. According to Watson, it is simply quicker and easier to understand pictures representing what is going on.

Yet computer visualization in the laboratory is a notable achievement in its own right. In its earliest forms, scientists could take individual snapshots of experimental results, but until recently did not have computers powerful enough to animate them over time.


That is because visualization requires huge quantities of data. For instance, the data required to paint a still picture of a sports car is easily equivalent to that of a several-hundred-page book. Then add the data necessary to solve the engineering problems themselves.

To analyze the aerodynamics of a sports car, the vehicle must be broken up into grids involving hundreds of nodes, with complex stress analysis and fluid dynamics equations solved at each node. For animation, a series of hundreds of such individual pictures and technical analyses must be generated, a task that until recently required an expensive supercomputer.
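The scale of the problem is easy to see with back-of-envelope arithmetic. All the specific numbers below are illustrative assumptions (a typical early-1990s frame size and page density), not figures from the article:

```python
# Rough data-volume arithmetic for the kind of imagery described above.
width, height = 640, 480           # assumed frame resolution
bytes_per_pixel = 3                # 24-bit color
frame_bytes = width * height * bytes_per_pixel   # one still image

chars_per_page = 3000              # a densely printed book page
pages_equivalent = frame_bytes / chars_per_page  # roughly 300 pages

frames = 30 * 60                   # one minute of 30-frames/sec animation
animation_bytes = frames * frame_bytes           # about 1.6 gigabytes, uncompressed
```

Even before any stress or fluid dynamics equations are solved, a minute of raw animation runs to gigabytes, which is why the work long demanded supercomputers.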

Today, a scientist can perform the visualization of the results on a powerful desktop computer and record the output on videotape. Furthermore, with today’s faster supercomputers, researchers can visualize the results of highly complex experiments in greater detail.

Way back in 1985--the Stone Age of visualization--it cost about $10,000 to churn out one second of computer-generated “footage,” recalls Wavefront’s Enright. Only the entertainment and the insurance worlds (where major aircraft or automotive accidents were recreated by computer) could afford these prices.

But by 1989, desktop computer advances had brought the cost of a finished second down to as little as $500. Besides scientists and engineers, architects have begun using visualization to paint highly detailed three-dimensional renderings of planned buildings that prospective clients can “walk through” via computer.

Furthermore, with faster, cheaper computers, users could solve a wider range of problems, or repeatedly manipulate their problems and retest the results.


Instead of simply recreating known events, scientists and engineers could now ask "what if" questions, such as how a sports car would handle in a windstorm if its design were changed.

This is beginning to aid in the design of fighter aircraft, says Watson of NASA. Today, plane designs are still generally tested the old-fashioned way, placing mock-ups and prototypes inside wind tunnels. But because those tests are time-consuming and expensive, aeronautics engineers are beginning to use visualization to ferret out the best designs before they ever get to the wind tunnel.

In many cases, adds Watson, computerized analyses using visualization may even be more accurate than wind tunnels because of the distortion caused by the wind tunnel’s walls during tests. Furthermore, computers can test certain types of aircraft designs that wind tunnels cannot, such as the proposed aerospace plane.

While some visualizations feature the same realism that has made the T-1000 Terminator this summer’s box office star, most do not. In fact, in many cases, visualization is used to generate objects that describe conditions that would otherwise be invisible.

An excellent example can be seen in research on the development of severe thunderstorms being done at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign.

As part of his long-term study of severe thunderstorm systems, Robert Wilhelmson of the NCSA wanted to learn more about how a storm’s wind and rain patterns affected its intensity and duration. The team also was looking for the intertwining of updrafts and downdrafts, known as rotationals. “We were looking for a kind of local rotation in which a tornado might develop,” he said.


Wilhelmson, members of the NCSA visualization staff and the university’s department of atmospheric studies used a Cray X-MP/48 supercomputer to mathematically simulate a two-hour storm based on observed data from an actual severe thunderstorm that swept across Texas and Oklahoma in 1964. The latest graphical representations were developed during 1989 and 1990.

The simulated region, covering 5,400 square kilometers and extending 16 kilometers in altitude, was divided into nodes spaced one kilometer apart horizontally and 500 meters vertically. At each node, mathematical calculations were carried out at set intervals to represent the changes in the storm over time.
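From the dimensions given, the size of the computational grid follows directly. The only assumption in this sketch is that the vertical levels include a node at ground level (hence the `+ 1`):

```python
# Node-count arithmetic for the storm grid described above.
area_km2 = 5400            # horizontal extent of the simulation
horiz_spacing_km = 1.0     # one node per square kilometer
height_km = 16
vert_spacing_km = 0.5      # 500 meters between vertical levels

horizontal_nodes = int(area_km2 / horiz_spacing_km ** 2)
vertical_levels = int(height_km / vert_spacing_km) + 1  # assumes a ground-level node
total_nodes = horizontal_nodes * vertical_levels
```

That works out to well over 100,000 nodes, each recalculated at every time step over the simulated two hours, which is what made the Cray necessary.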

To get a clear view of the air movement, updrafts were represented by orange ribbons, downdrafts were represented by blue ribbons, and areas of heavy precipitation were shaded deep gray.
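The color-coding scheme is a simple classification of the data at each point. This Python sketch follows the NCSA team's scheme as the article describes it; the precipitation threshold and units are hypothetical, chosen only to make the example concrete:

```python
def display_color(vertical_velocity, precip_rate):
    """Assign a display color to a data point: orange for updrafts,
    blue for downdrafts, deep gray where precipitation is heavy.
    The precipitation cutoff (in mm/hr) is an assumed value, not
    one reported by the researchers.
    """
    if precip_rate > 20.0:             # hypothetical heavy-rain cutoff
        return "deep gray"
    return "orange" if vertical_velocity > 0 else "blue"

# Sample (vertical velocity, precipitation rate) data points:
samples = [(12.0, 0.0), (-6.0, 0.0), (3.0, 35.0)]
colors = [display_color(v, p) for v, p in samples]
# colors: ["orange", "blue", "deep gray"]
```

Run over every node of the grid at every time step, a rule like this turns raw velocity and precipitation fields into the colored ribbons and shaded regions the researchers watched.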

As Wilhelmson and his team watched the visualization, the updraft and downdraft ribbons intertwined, as expected. “We were able to follow the air moving through the storm and then rotate the ribbons to represent how fast the air was rotating at that point. This gave us a real definitive picture of what was happening,” Wilhelmson said.

By seeing the storm evolve on computer, Wilhelmson was able to conclude that both updrafts and downdrafts had to remain separate if a storm was to persist. When he analyzed precipitation patterns, he found similar results. If it rained into an updraft, that would disrupt air movements and rob the storm of its energy.

Wilhelmson hopes to develop a more detailed simulation which he hopes will enable him to see the spawning of a tornado. He hopes that the ability to view, absorb and share with colleagues such vast amounts of data in this manner can eventually help meteorologists predict killer storms.


Despite the enormous advances that have been made in visualization during the past few years, the technique is still in its infancy. But strong interest and declining computing costs indicate a bright future. Today, says Enright, a company can get started in using visualization by investing as little as $25,000 in hardware and software. Frost & Sullivan, a market research firm, predicts that sales of graphics computer workstations capable of running visualizations will almost triple in the next five years.

But software developers will have to make their visualization programs easy enough to use that they will not require specialists, says Watson. “At Ames today, the scientist has the workstation right in his office, and he’s the one who creates the picture, not the computer graphics expert.” According to Watson, that is the only way to solve problems and do the necessary what-ifs in reasonable time.

For the moment, the T-1000s of the silver screen will hold sway in the public’s mind when it comes to computer visualization. But in coming years, the ability to use computers to draw animated pictures of the view inside Mother Nature’s bag of tricks will no doubt prove equally awe-inspiring and of far more value.

Drawing by Computer

Techniques of computer visualization allowed researchers at the National Center for Supercomputing Applications to see the relationships between precipitation and wind patterns in a simulated thunderstorm. Here’s how it looked.

* First, the overall area of the storm (represented by the wire-frame figure) is compared to wind intensities on the ground (shown by shadings of gray to black). This indicates that winds are highest in the southeast quadrant of the storm.

* Next, the water density of clouds (represented by contour lines) is compared to convective wind currents, which show winds swirling clockwise up and around the storm center.


* Finally, ground-level wind direction data is added, which shows both horizontal and vertical wind currents.

* After studying these and other views, researchers concluded that convective wind currents can easily feed or stifle a developing storm.
