
Hand It to Them

By KAREN KAPLAN, TIMES STAFF WRITER

Sometimes, a picture is worth more than a thousand words.

A picture of a cancerous lung tumor, for example, can be matched against a multimedia database and enable doctors to design a potentially lifesaving course of treatment. A video of a cybermicroorganism can help biologists explain how the real thing takes shape from a single cell. And a 3-D virtual car might even serve as a working model that a designer could shape with his hands.

These and other feats are made possible by advances in the burgeoning field of computer graphics. Founded in the mid-1960s by computer science gurus David Evans and Ivan Sutherland at the University of Utah, the discipline has moved far beyond mere digital pictures and begun to make major contributions to fields as diverse as medicine, radio communications, entertainment and architecture.

“Once upon a time, those of us in computer graphics used to paint pictures on screens, and all you could do was just look at it,” said Turner Whitted, who helped organize Siggraph, the annual conference of the Assn. for Computing Machinery’s subgroup for graphics. The conference takes place this week at the Los Angeles Convention Center.


Today, Whitted said, the technology has advanced to the point where users can roll up their sleeves and interact with computer graphics creations almost as if they were physical objects.

“The user himself is getting sucked into the screen,” Whitted said.

*

In simple terms, computer graphics technology makes it possible to visualize--and manipulate--massive amounts of data. For years, though, its development has been limited by the huge amount of computer horsepower required for even simple applications. Turning weather data into a visual rendering of a storm system, for example, could easily require a Cray supercomputer.

But with the continued exponential advances in the power of the microprocessor, these obstacles are falling away. A tour of university laboratories in Southern California and elsewhere reveals the remarkable breadth and potential of a field that is just now coming into its own.

At UCLA, computer science professor Wesley Chu is working with faculty from the medical school in Westwood to build a database of X-ray images. Each X-ray picture will be linked to a patient’s case history--including the course of treatment and its effectiveness--along with audio files of the doctor’s observations and video documenting the patient’s progress.

At first, this sounds like just a fancy multimedia version of the patient files that are usually kept on paper. But as digital files, these case histories are far more valuable because computers can search through them to find similar cases with clues about how best to treat an individual patient.

The key to making this work is computer graphics, Chu said. In order to search through the files of thousands of lung cancer patients, the computer must be able to recognize a tumor in an X-ray. That’s significantly more difficult than using a search engine to find a keyword in a text document: words either match or they don’t, but pictures of similar tumors can look quite different.


Chu and his colleagues are teaching computers how to recognize features such as shape, size, volume, location and texture in a 3-D MRI image. Then the system, dubbed KMeD, can use those parameters to find tumors similar to one in a new patient. Physicians can review files of earlier cases before deciding how to treat the new patient.
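In spirit, the search resembles nearest-neighbor matching on numeric descriptions of each tumor. The Python sketch below is illustrative only--the feature names, values and `find_similar_cases` helper are invented for this example, not taken from KMeD.

```python
import numpy as np

# Hypothetical sketch of feature-based retrieval: each archived tumor is
# reduced to a numeric feature vector (shape, size, volume, location,
# texture), and "similar cases" are the nearest vectors to the new one.

def find_similar_cases(new_features, archive, k=5):
    """Return indices of the k archived cases closest to new_features.

    new_features: 1-D array of per-tumor measurements.
    archive: 2-D array, one row of measurements per archived case.
    """
    distances = np.linalg.norm(archive - new_features, axis=1)
    return np.argsort(distances)[:k]

# Example: five archived cases described by (size_mm, volume_cc, texture).
archive = np.array([
    [12.0, 0.9, 0.3],
    [30.0, 9.1, 0.7],
    [11.5, 0.8, 0.4],
    [28.0, 8.0, 0.6],
    [19.0, 3.2, 0.5],
])
new_case = np.array([12.2, 1.0, 0.35])
print(find_similar_cases(new_case, archive, k=2))  # -> [0 2]
```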

“We’re not trying to replace physicians; we’re helping them make decisions,” Chu said.

Across town at Caltech in Pasadena--one of five universities in the National Science Foundation-sponsored Graphics and Visualization Center--researchers in the Computer Graphics Laboratory are also developing techniques to squeeze more information out of images made by magnetic resonance imaging machines. An MRI scan measures tissue densities within a body part, and researchers use that data to make pictures that show where the skin ends and the tendons underneath begin.

Body parts such as hands are especially difficult to draw because of their irregular shapes. And it’s hard to determine the exact boundary of two kinds of tissue, such as fat and bone. But Caltech researchers use rigorous mathematics to draw extremely precise pictures of hands and wrists. The next step is to apply this technology to hands in motion, professor Alan Barr said.
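The boundary-finding step can be imagined as extracting an isosurface--the surface where scanned density crosses a chosen threshold. Below is a minimal sketch using the scikit-image library’s marching cubes routine on synthetic data; the fuzzy sphere and the 0.5 threshold are assumptions for illustration, not Caltech’s actual method.

```python
import numpy as np
from skimage import measure  # scikit-image, one common isosurface tool

# Synthetic stand-in for MRI density data: a fuzzy sphere in a 64^3 grid.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
density = np.exp(-4 * (x**2 + y**2 + z**2))

# Marching cubes walks the grid and emits a triangle mesh wherever the
# density crosses the chosen level -- the "skin" between two tissues.
# The 0.5 level is an assumed, illustrative value.
verts, faces, normals, values = measure.marching_cubes(density, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles on the boundary")
```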

“Then we could figure out what hand and wrist postures are the least likely to produce carpal tunnel syndrome,” he said. Carpal tunnel syndrome is a debilitating repetitive stress injury.

Barr and his colleagues also use computer graphics to make models of microscopic organisms that can test biologists’ theories about how they grow. The graphics experts write programs so that the cells know the rules that govern how they can multiply, what they will collide with and how they can bond with each other. Then they run the program and watch to see whether the cells organize themselves into the shapes the biologists had predicted.

“We can do a simulation and see if the model works the way it does in real life,” Barr said. If it does, that affirms their theories.
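A stripped-down version of such a rule-driven simulation might look like the following Python sketch. The division and collision rules here are invented stand-ins for the biologists’ real rules.

```python
import random

# Minimal sketch of rule-driven growth, with made-up rules: each cell
# may attempt one division per step, and a site refuses a new cell if
# it is already occupied (a stand-in for collision rules).

def step(cells, occupied):
    """Advance the colony one generation under the division rules."""
    for x, y in list(cells):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        site = (x + dx, y + dy)
        if site not in occupied:          # collision rule: one cell per site
            cells.append(site)
            occupied.add(site)

cells, occupied = [(0, 0)], {(0, 0)}
for _ in range(10):
    step(cells, occupied)
print(f"colony grew to {len(cells)} cells")  # compare with the predicted shape
```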


At USC, computer science professor Gerard Medioni is teaching a computer how to draw a 3-D object based on two still photographs. Using pattern-recognition techniques and a form of mathematics called projective geometry, the computer can establish points in two pictures (such as corners and straight edges) that correspond to each other. Then it can tell how the object would look from any viewpoint in between.
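A rough software analogue of the correspondence step is sketched below, using the OpenCV library’s ORB feature matcher as a stand-in for Medioni’s pattern-recognition techniques. Linearly blending the matched points is a crude simplification of true viewpoint synthesis.

```python
import cv2
import numpy as np

def corresponding_points(img_a, img_b):
    """Find point pairs that depict the same feature in both photos."""
    orb = cv2.ORB_create()
    kp_a, desc_a = orb.detectAndCompute(img_a, None)
    kp_b, desc_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    return pts_a, pts_b

def in_between(pts_a, pts_b, t=0.5):
    """Where those features would sit in a view t of the way from A to B."""
    return (1 - t) * pts_a + t * pts_b
```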

*

Medioni, who is on the faculty of USC’s prestigious Integrated Media Systems Center, thinks this technology can be used to create virtual sets for movies and television. For example, producers could snap a series of photographs of the Kremlin’s interior and use this “viewpoint synthesis” technique to create an entire backdrop that could be composited behind actors filmed in front of a blue screen, he said.

Helen Na is using computer graphics to study the ionosphere, the outer layer of Earth’s atmosphere, which is filled with free electrons. Like a weather system, the ionosphere is constantly changing, especially in response to solar conditions.

That’s a problem for the military, which communicates with soldiers by bouncing radio messages off the ionosphere. If the shape of the ionosphere is not known precisely, the signal may not go where it is intended.

“You want to be able to design a signal so that it will go where you want,” said Na, an electrical engineering professor at UCLA. “You also want to be able to tell where a signal came from.”

Na uses data from 16 radio beacon receivers in the Caribbean to calculate the density of the ionosphere, hundreds of miles above sea level. With computer graphics, she constructs tomographic images of the ionosphere’s ever-changing shape. Denser areas appear in a different color than thinner ones, which makes the maps easier to interpret.
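The underlying inversion can be sketched as a linear least-squares problem: each receiver measurement is, roughly, a sum of electron densities along the signal’s path. The Python below uses random numbers in place of real ray geometry, so the sizes and values are purely illustrative.

```python
import numpy as np

# Sketch of the tomographic idea: if A[i, j] is the length of ray i
# inside grid cell j, the measurements b satisfy A @ x ~= b, and the
# unknown cell densities x fall out of a least-squares fit. The matrix
# below is random filler standing in for real ray geometry.

n_rays, n_cells = 48, 30               # assumed sizes, for illustration
rng = np.random.default_rng(0)
A = rng.random((n_rays, n_cells))      # path length of each ray in each cell
true_density = rng.random(n_cells)     # "ground truth" for the demo
b = A @ true_density                   # simulated receiver measurements

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("max reconstruction error:", np.abs(x - true_density).max())
```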


At the University of North Carolina at Chapel Hill (another member of the NSF Graphics and Visualization Center), researchers are using computer graphics to see microscopic structures--and even move them around. With a device called a nanoManipulator, a graphics workstation converts data from an atomic-force microscope into a realistic 3-D image that is magnified 1 million times.
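The rendering half of that pipeline can be approximated in a few lines: treat the microscope’s output as a grid of surface heights and shade it as a 3-D landscape. This matplotlib sketch uses synthetic bumps in place of real scan data.

```python
import numpy as np
import matplotlib.pyplot as plt

# An atomic-force microscope reports a grid of surface heights, which a
# graphics program can shade as a 3-D landscape. The bumpy synthetic
# data below stands in for a real scan.

y, x = np.mgrid[0:1:64j, 0:1:64j]
heights = 0.1 * np.sin(12 * x) * np.cos(9 * y)   # fake nanometer-scale bumps

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, heights, cmap="viridis")
ax.set_title("Height map rendered as a 3-D surface")
plt.show()
```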

Using the tip of the microscope, which is connected to a force-feedback device, UNC scientists have poked around at a virus that attacks tobacco plants to see what makes it stick to its victims. They have also used the nanoManipulator to push a 20-nanometer ball of gold (about 300 times smaller than the width of a human hair) into a tiny gap in a gold wire.

“This is a sandbox for physicists, biologists and gene therapists,” said Russell M. Taylor II, a computer science professor who has been developing the nanoManipulator for five years.

The range of potential applications for computer graphics is also expanding with the help of new types of equipment. At Stanford University, researchers are developing a “responsive workbench,” a display system that projects computer-generated images onto a table-like device with a 6-by-3-foot screen. The computer creates separate images for the left and right eye, and the user wears special shutter glasses to create the 3-D effect.
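The stereo trick itself is simple to state: render the scene twice, from viewpoints separated by roughly the distance between the eyes. A minimal Python sketch, with an assumed 6.5-centimeter eye separation:

```python
import numpy as np

# Render the scene twice, from two camera positions separated by the
# distance between the eyes; the shutter glasses then show each
# rendering to the matching eye. The 6.5 cm separation is a typical
# assumed value, not a figure from the Stanford workbench.

def eye_positions(head, right_dir, eye_separation=0.065):
    """Camera positions for the left and right eye views (meters)."""
    head = np.asarray(head, dtype=float)
    offset = 0.5 * eye_separation * np.asarray(right_dir, dtype=float)
    return head - offset, head + offset

left, right = eye_positions(head=[0.0, 1.6, 0.5], right_dir=[1.0, 0.0, 0.0])
print("left eye at", left, "| right eye at", right)
```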

Although computer screens are suitable for many people, a workbench would be preferable for architects, surgeons, automobile designers and others who require a tabletop environment, said Pat Hanrahan, a professor in Stanford’s computer science department. Objects on the workbench--an idea that originated at the German National Research Center for Information Technology--can be grabbed, moved and rotated. Some items, such as individual bones from a virtual skeleton, can be linked to text or other information. Hanrahan and his colleagues are perfecting a workbench that two people can use simultaneously.

The workbench paradigm “is more natural,” he said. “We can just reach into the world and grab something and manipulate it. With a mouse and a monitor, it’s all indirect.”


USC’s Medioni is developing a pen that will draw a 3-D computer image when it traces a 3-D object. When the pen traces two sides of an object, the computer will automatically compute the shape of the edge that joins them.

Medioni has trained a computer to discern the curve of a surface at every point where the pen touches it. Then the computer uses those points to calculate the overall shape of the object, whether it’s smooth like a ball or lumpy like an oversized model of a molar.

Medioni can also use the pen to draw a shape freehand. To make a cylindrical rod, for example, he draws a circle in the air and then a line extending from the edge. The computer infers the rest. By keeping track of the angles between the joints of the pen, the computer can determine exactly where the tip is in 3-D space.
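That last step is a textbook forward-kinematics calculation. The planar Python sketch below shows the idea with assumed link lengths; the real pen performs the equivalent computation in three dimensions.

```python
import math

# If each link's length is known and each joint reports its angle,
# summing the link vectors locates the tip. This 2-D version is a
# simplification of the pen's 3-D calculation.

def tip_position(link_lengths, joint_angles):
    """Tip (x, y) of a jointed arm from per-joint angles in radians."""
    x = y = 0.0
    heading = 0.0
    for length, angle in zip(link_lengths, joint_angles):
        heading += angle                 # each joint bends relative to the last
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# Three 10 cm links, bent 30 degrees at the second and third joints.
print(tip_position([0.1, 0.1, 0.1], [0.0, math.pi / 6, math.pi / 6]))
```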

A pair of electrical engineering professors at UCLA are designing a system based on a computer chip that can reprogram itself to process images more efficiently than standard chips. Some chips, called application-specific integrated circuits (ASICs), are designed to perform only one kind of function but to do it with exceptional speed. Other chips, such as Intel’s Pentium microprocessor, are designed to handle a wide range of applications but perform them more slowly.

*

The chip that John Villasenor and William Mangione-Smith are working on combines the best aspects of the two chips. The hardware is designed for only one task--in this case, to recognize a particular shape, such as a tank in a satellite image of a battlefield. But within a matter of milliseconds, the logic gates on the chip can be reconfigured to look for a different shape.

“If you’ve got 500 [different shapes] on 500 separate ASIC chips, then it takes up more space and costs you more power,” Villasenor said. But the reprogrammable chip can churn through calculations faster than a microprocessor, he said.
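In software terms, the chip’s job resembles template matching--sliding a small picture of the target shape across the image and flagging strong matches. This SciPy sketch, with a toy image and an invented `find_shape` helper, shows the kind of computation such hardware accelerates; "reprogramming" here is just swapping in a different template array.

```python
import numpy as np
from scipy.signal import correlate2d

def find_shape(image, template, threshold=0.9):
    """Return (row, col) corners where the template matches well."""
    score = correlate2d(image, template, mode="valid")
    score = score / score.max()            # normalize for a simple threshold
    return np.argwhere(score >= threshold)

image = np.zeros((16, 16))
image[4:7, 9:12] = 1.0                      # a 3x3 "tank" blob in the scene
template = np.ones((3, 3))
print(find_shape(image, template))          # -> [[4 9]]
```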


Villasenor and Mangione-Smith have demonstrated their chip on a small scale, and they are currently building a full-scale system for Sandia National Laboratories.

Whitted, the Siggraph organizer, said that despite the plethora of computer graphics technologies to be showcased at this week’s conference, even more dramatic applications are yet to come. Some will make it possible to communicate with a computer by voice and hand motions instead of a mouse and keyboard. Others will make virtual reality videoconferencing possible without clunky hardware such as goggles and gloves.

“With computer graphics,” he said, “you can create a world that doesn’t obey the old rules.”

Karen Kaplan covers technology, telecommunications and aerospace. She can be reached at karen.kaplan@latimes.com
