Computer Developers Aiming to Exterminate the Mouse

TIMES STAFF WRITER

Controlling a computer has been largely defined over the years by the humble keyboard and mouse. Now, researchers are turning their attention to new kinds of controllers, including eye movements, voice commands and even brain waves.

“One of the questions we in the industry struggle with every day is how we can make computers easier to use,” said Tim Bajarin, president of Creative Strategies Inc., a consulting firm in Silicon Valley. “Not everybody is keyboard-centric. You have to look at alternative methods of getting computers into the hands of more people.”

Experts say keyboards and mice force users to adapt to the technology, not the other way around.

Tomorrow’s interfaces, several of which will be shown at the annual Siggraph conference in Los Angeles this week, instead aim to adapt technology to humans by drawing on the body movements people make in everyday life.

It is a trend toward natural or organic interfaces that, because of past limits in processing power, has remained in the realm of science fiction until very recently.

“We may think the keyboard and mouse are perfectly fine,” said Mk Haley, chairwoman of Siggraph’s Emerging Technologies program and a researcher at Walt Disney Imagineering Research & Development Inc. in Glendale. “The next generation may think, ‘Why do keyboard and mouse when you can just have brain implants?’ ”

Although direct brain-to-machine connection is one of the Holy Grails of the movement toward natural interfaces, researchers also are looking to exploit speech and language recognition, hand gestures, body shifting, eye movements and even emotional reactions as ways for users to interact with technology.

Many of these concepts, however, are unlikely to land in consumers’ hands any time soon, for reasons including high cost, limited applications and technical constraints. Indeed, many of them will end up in the Darwinian dustbin of failed attempts that already holds such items as the Nintendo Power Glove and Microsoft’s Bob, a computer-generated “helper” meant to guide new users through Windows.

In the meantime, learning what works and what doesn’t will require some bizarre experimentation.

Consider one experiment that connects a lamprey eel brain to a wheel contraption. Shining a light on the contraption sends signals to the eel brain, which in turn emits nerve impulses back to the wheel, causing it to rotate toward the light. That research, to be presented at Siggraph by Northwestern University neurobiologist Sandro Mussa-Ivaldi, gives an early glimpse of controlling computers by thoughts.

Today, the technology is used as an experimental tool for disabled people to move objects and, in the case of biological implants, their own limbs.

Brain waves are just one of several avenues for interacting with a computer. A meditation chamber to be demonstrated at Siggraph combines vital signs and brain wave patterns to create a relaxing virtual environment. One possibility is building a system that detects when a person is about to wake up and then turns on the stereo, brews a cup of coffee and draws the blinds.

Our understanding of brain waves remains crude, however. Though numerous efforts are underway to map brain signals to physical responses, this interface is the furthest of all from delivering on its promise.

Closer to reality are interfaces that rely on the sense of touch, or haptics. Take the sensingChair, developed by researchers at Purdue University. Sitters lean in the chair to control a virtual car in a driving simulation. Leaning forward will cause the car to accelerate. Leaning back will apply brakes. Leaning left will cause the car to turn left. Users can literally drive by the seat of their pants.

Although the chair isn’t suitable for editing a document, it’s more intuitive than a mouse or arrow keys for navigating a three-dimensional virtual environment, said David Ebert, associate professor of engineering at Purdue, who worked with colleague Hong Tan to develop the application for the chair.
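
As a rough sketch of the mapping described above, the lines below translate a sitter’s lean into driving controls; the value ranges and names are invented for illustration and are not drawn from the Purdue system.

def lean_to_controls(lean_forward, lean_left):
    """Map lean amounts in [-1.0, 1.0] to throttle, brake and steering."""
    throttle = max(lean_forward, 0.0)   # lean forward: accelerate
    brake = max(-lean_forward, 0.0)     # lean back: apply the brakes
    steering = lean_left                # lean left or right: turn
    return {"throttle": throttle, "brake": brake, "steering": steering}

# Example: a sitter leaning gently forward and slightly to the left.
print(lean_to_controls(0.4, 0.2))   # {'throttle': 0.4, 'brake': 0.0, 'steering': 0.2}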

Another touch-based device is the haptic pen. Equipped with a sensor that detects molecular-level changes, the pen can be programmed to “feel” the difference between two substances, such as oil and water. In the Siggraph demonstration, the pen lets holders glide through oil, but resists when it hits water. Imagine a scalpel that will cut only cancerous tissue, but stops at healthy tissue.
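
A minimal sketch of that idea, assuming a simple lookup from the sensed substance to a feedback force; the substances and numbers are invented, and the real device’s sensing is not described in this detail.

def pen_resistance(substance):
    """Return how strongly the pen pushes back, on a 0-to-1 scale."""
    resistance_table = {"oil": 0.1, "water": 0.9}   # glide through oil, resist at water
    return resistance_table.get(substance, 0.5)     # middling default for unknown media

for medium in ("oil", "water"):
    print(medium, pen_resistance(medium))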

“Humans have five senses, but touching is very intuitive,” said Takuya Nojima, the University of Tokyo researcher who developed the device. “We are psychologists as much as we are engineers. The field is psychophysics, and it helps us understand how people interact with technology. In our case, we wanted to understand the exact building blocks of sensation. What does it mean to feel something?”

One project that combines the sense of touch, sight and speech is a system that picks up keywords in conversations and pulls pictures from the Internet that relate to those keywords. A dozen or so pictures float on a touch-sensitive screen the size of a window on a train. Touching an image will pull up additional information about the image. As the conversation flows, the program picks up additional keywords and pulls up more images.
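
A toy sketch of that loop, assuming a keyword list and an image-search function that are stand-ins rather than the researchers’ actual components.

def refresh_image_wall(spoken_words, known_keywords, search_images):
    """Collect a few pictures for every keyword heard so far in the conversation."""
    wall = []
    for word in spoken_words:
        if word.lower() in known_keywords:
            wall.extend(search_images(word)[:3])   # a handful of images per keyword
    return wall

# Toy stand-in for an image search service.
def fake_search(word):
    return [f"{word.lower()}_photo_{i}.jpg" for i in range(3)]

print(refresh_image_wall(["We", "took", "the", "train", "to", "Kyoto"],
                         {"train", "kyoto"}, fake_search))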

“It’s a bit like free association,” said Christa Sommerer, researcher at the Advanced Telecommunication Research Lab in Kyoto, Japan. “This can be useful for people who don’t have much experience with computers. You don’t have to type anything on a keyboard. You just sit there and speak, and your conversation automatically links you to the Internet.”

Tapping into things humans already do is a common thread among designers of the next-generation interfaces. The Enhanced Desk is another example. Users simply move their hands to manipulate virtual objects and images on a “desk” developed by researchers at the University of Tokyo.

Whereas these projects seek to exploit one of the five human senses, researchers at the Massachusetts Institute of Technology are exploring a sixth avenue--people’s natural tendency toward social relationships--to build their interface.

Think of an interface that facilitates dialogue between humans and computers. Would that dialogue be more meaningful or entertaining if the computer were able to detect and respond to people’s emotions? That’s the question posed by MIT researcher Bill Tomlinson, who built a demonstration in which people play the part of a wolf cub in a virtual pack.

“We’re not proposing that people howl at their computers,” Tomlinson said. “We’re building interfaces that can serve the changing needs of people and can build emotional relationships with them. And that certainly isn’t going to happen with a keyboard and mouse.”

Much of the research on display at Siggraph is pretty far out, and some ideas are destined to fail.

“Breaking new ground for the sake of breaking new ground is dangerous,” said Jakob Nielsen, principal of the Nielsen Norman Group in Mountain View, a technology consulting firm. “This is not the way you make products for everyday use. A different approach would be to define human problems and design solutions around them.”

For an interface to catch on, it has to be robust, easy to use, cost-effective and useful in as many applications as possible, Nielsen said. That’s why the keyboard and mouse have endured for so long.

“The focus has to start with people’s lives,” Nielsen said, “not cute ideas.”
