Light at the tunnel’s end

Special to The Times

ELIZABETH GOLDRING, a poet and artist, is nearly blind. Now, after years of hard work, she can view words and faces and wander through a virtual art gallery of her work. An instrument called a “seeing machine,” under development at the Massachusetts Institute of Technology, where she works, is making the point that just because a person is blind doesn’t necessarily mean she can’t see.

The MIT machine, which takes advantage of what’s left of a person’s retina, is only one of several seeing devices for the blind under development. Other technologies use sound to represent visual information or to otherwise guide blind users -- an approach supported by research showing that sound can be processed by brain regions that normally handle vision.

The MIT device grew out of a discovery Goldring made at her ophthalmologist’s office 20 years ago. (At the time she was blind, though surgeries have since given her a bit of sight in one eye.) That long-ago day, her doctor used a device called a “scanning laser ophthalmoscope” to shine the word “sun” through her blood-filled eyeball and onto her retina. Because it was so bright, she could see it -- and read it.

The experience sent Goldring on a search for a lower-cost alternative that would allow her to see at least something -- and to Robert Webb, a physicist at Schepens Eye Research Institute in Boston and inventor of the scanning laser ophthalmoscope.

To make a lower-cost machine, Webb and co-workers started with parts from an old video projector and substituted light-emitting diodes for the projector’s bulbs. LEDs are less expensive than lasers and can project images and words onto the retina just as well. Then they devised a version to test in the lab: a computer that feeds the projector either images of words or a virtual building to navigate.

Sitting at a table, the user peers into the projector’s light source and moves through the images using a joystick. To find a part of the retina that still senses light, the user moves his or her head around until something can be seen.

Webb’s team tested the machine on 10 people who were nearly blind -- and reported in the February issue of the journal Optometry that all of the subjects could see and identify most of the images and words. Seven of the 10 thought that navigating through a virtual world with the machine would help them navigate in real life.

Webb also hooked up a camera to the projector and captured the image of one of Goldring’s friends, an experience she termed “incredible.” “I could see the expression in his eyes, his mouth,” she says. Without the machine, “even with my good eye, I couldn’t see that.”

The team needs more funding to get the machine out of the lab and into the hands of people who could use it. There could be a lot of them, says co-developer Jerry Cavallerano, an optometrist at Harvard’s Beetham Eye Institute in Boston. Many diseases that cause blindness spare part of the retina, and a surviving patch of retina is all this machine needs to help.

But seeing doesn’t have to involve light. Sound can also convey information about the visual world. Some developers have been working on technologies that rely on echoes to guide people through streets and hallways.

For example, the white mobility canes that some blind people use can be outfitted with a sonar device that projects high-pitched sounds. The quality of the sound that bounces back indicates where objects are, and how big or solid they are.
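
The ranging principle behind such a cane is straightforward: the farther away an object, the longer its echo takes to return. Here is a minimal sketch of that arithmetic in Python -- an illustration of the physics, not any actual cane’s firmware, assuming sound travels through air at roughly 343 meters per second:

```python
SPEED_OF_SOUND = 343.0  # meters per second in air at about 20 C

def echo_distance(round_trip_seconds):
    """Distance to an obstacle from an ultrasonic echo's round-trip time.
    The pulse travels out and back, so the one-way distance is half."""
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

# An echo returning after 12 milliseconds puts the obstacle ~2 meters away.
print(f"{echo_distance(0.012):.2f} m")  # 2.06 m
```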

But this kind of echolocation device conveys only an object’s location and a few of its characteristics -- how hard, soft, skinny or tall it is.

Physicist Peter Meijer, who works at Philips Research in Eindhoven, the Netherlands, is developing a gadget in his off time that helps blind people “visualize” things such as photographs or images on computer screens. This is a different kind of information from what echolocation provides: pointed at a computer, echolocation could indicate merely the square shape of the monitor, not the images displayed on the screen. Meijer’s technology, called the vOICe, uses sound to convey that graphical information -- for instance, the shape of a wavy line curving across the monitor’s screen.

The sound doesn’t bounce back from the object. Rather, a camera -- which can be hooked up to a pair of sunglasses that the user would wear -- scans the image. That visual information is converted via a computer into swooshes and high and low sounds that the wearer hears through earphones. For example, the pitch of the sound relays the height of a square drawn on paper. And the brighter an object is, the louder the sound: A light-filled window would be heard as louder than the wall around it.
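
A rough sketch in Python can make that mapping concrete. This illustrates the idea the article describes -- pitch for height, loudness for brightness, a left-to-right sweep across the image over time -- and is not Meijer’s actual code; the sample rate, pitch range and scan time are assumptions:

```python
import numpy as np

SAMPLE_RATE = 22050         # audio samples per second (assumed value)
SCAN_SECONDS = 1.0          # time to sweep the image left to right (assumed)
F_LO, F_HI = 500.0, 5000.0  # pitch range assigned to image rows (assumed)

def image_to_soundscape(image):
    """Convert a grayscale image (2-D array, values 0..1, row 0 at the top)
    into mono audio: columns are scanned left to right over time, each row
    maps to a fixed pitch, and pixel brightness sets that pitch's loudness."""
    height, width = image.shape
    samples_per_col = int(SAMPLE_RATE * SCAN_SECONDS / width)
    t = np.arange(samples_per_col) / SAMPLE_RATE
    freqs = np.linspace(F_HI, F_LO, height)   # top rows sound higher
    columns = []
    for x in range(width):
        tone = np.zeros(samples_per_col)
        for y in range(height):
            if image[y, x] > 0:               # dark pixels stay silent
                tone += image[y, x] * np.sin(2 * np.pi * freqs[y] * t)
        columns.append(tone / height)         # keep loudness in range
    return np.concatenate(columns)
```

Played back, a bright line rising across such an image would be heard as a sweep climbing in pitch from left to right.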

Because the device takes a moment to scan, convert and relay the auditory information, Meijer says, “It could not reliably detect an oncoming car,” but people could use it while walking along the street to examine posters hanging on buildings, for example.

Users of the vOICe have to learn an alphabet of sounds that can be assembled into “soundscapes” denoting visual details. Once proficient, longtime users can accurately describe a photograph of a street scene -- trees on the left, a building on the upper right -- from the tones, beeps and loudness of the sounds alone.

Meijer says the gear is not for “the faint of heart”: It takes several months of training. But once people learn, their brains literally see the soundscapes.

To demonstrate this, neurologist Dr. Alvaro Pascual-Leone of Harvard University has scanned the brains of two masters of the vOICe system. When they’re given only the sound of a rooster crowing, the auditory parts of their brains light up during functional magnetic resonance imaging. But when they use the vOICe to create a soundscape of a rooster image, the visual regions of their brains light up instead -- even though they’re actually hearing sounds.

“Once they learn the object’s soundscape, they are seeing the object with their mind’s eye,” Pascual-Leone says.

Meijer says it’s impossible for him to track how many people use the vOICe, but he thinks the training time discourages more widespread adoption. People assemble their own systems and download from his site the software that scans and translates visual information into soundscapes. The biggest expense is a laptop, which can run $2,500 for a “nice setup,” Meijer says. The sunglasses that contain the camera can cost about $500.

Goldring, meanwhile, hopes she can find the funds to make the MIT seeing machine portable and more available. And soon. Devices to help the visually challenged see can be a boon to their well-being, she says. “The technology is there so people who are blind do not need to be isolated from experiences. Seeing, when blind, is like reading a poem. It can be quite beautiful.”

*

GPS devices offer a voice in the darkness

Although many visually impaired people use canes outfitted with echolocation devices, this isn’t the only way to use sound to help navigate, or even the most practical, says Jay Leventhal, editor for the bimonthly magazine Access World, which covers assistive technologies for the blind and visually impaired. “Echolocation hasn’t caught on -- people have problems with putting something in their ears,” he says. Instead, devices that use global positioning system (GPS) technology to map out routes plus points of interest such as restaurants or subway stops -- and relay that information to the visually impaired by speaking to them through an earpiece -- are gaining in popularity. But even GPS isn’t perfect, Leventhal says: It has a 30-foot margin of error. “I’d like it to be able to take you right to a door,” he says.
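
The guidance such devices perform rests on standard geometry: from the user’s GPS fix and a point of interest’s coordinates, compute a distance and compass bearing, then render them as speech. A hypothetical sketch in Python -- the function names, coordinates and phrasing are illustrative, with the error margin converted from Leventhal’s 30 feet (about 9 meters):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (meters) and initial compass bearing (degrees)
    from the user's GPS fix (lat1, lon1) to a point of interest (lat2, lon2)."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing

def spoken_cue(name, dist, bearing, gps_error_m=9.0):
    """Render distance and bearing as a sentence for a text-to-speech earpiece.
    gps_error_m reflects the roughly 30-foot (9 m) margin Leventhal cites."""
    dirs = ["north", "northeast", "east", "southeast",
            "south", "southwest", "west", "northwest"]
    compass = dirs[int(((bearing + 22.5) % 360.0) // 45.0)]
    if dist <= gps_error_m:
        return f"{name} should be right around you, within GPS error."
    return f"{name} is about {int(round(dist))} meters to the {compass}."

# Example: guide a user toward a subway entrance (coordinates are made up).
d, b = distance_and_bearing(40.7500, -73.9900, 40.7507, -73.9895)
print(spoken_cue("The subway entrance", d, b))
```

The 9-meter cutoff is exactly why Leventhal wants finer accuracy: within that radius, the device can say only that the door is “nearby,” not take you to it.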

-- Mary Beckman
