New gesture, voice technology may make remote controls obsolete
The remote control has never been much beloved.
If it’s not getting lost or running out of batteries, the device — and its inscrutable buttons — is confusing some family member or acting as a totem in an argument about what to watch.
Wouldn’t it be nice to wave your hand, say a magic word and make the clicker disappear for good?
With a new generation of gesture- and voice-controlled televisions, that’s exactly what may happen.
Viewers can control a new line of TV sets simply by speaking or gesturing at them, eliminating the need for clunky pointing devices and opening up a range of new ways people can use and interact with their televisions.
At a giant booth built by Samsung Electronics for the Consumer Electronics Show last week in Las Vegas, a young woman gave a demonstration of the company’s new line of Smart TV sets, which come with a built-in Web browser as well as online applications such as Netflix, Skype and Facebook.
“Hi TV,” she said, issuing the verbal command for the TV to turn on. “Channel 1034.” The TV switched to a news channel. “Web browser,” she continued, and the Yahoo home page popped up.
Next, the attendant waved her palm at the small camera built into the top of the television, activating its gesture sensor. By moving her hand, she was able to guide a cursor around the on-screen Web page, and to “click” on links and photos by closing her fist.
Tech observers say gesture and voice recognition systems will grow more sophisticated as the computers embedded in smartphones, TVs, tablets and home appliances become more powerful. That could finally render the mouse and the remote control obsolete; the remote era began in 1950 with Zenith’s “Lazy Bones,” a remote connected to the TV through a long wire.
With the newer controls, “you can simply use what God or nature gave you: your hands or your body or your voice — and that’s all you need,” said Charles Golvin, an analyst at Forrester Research.
Going remote-less may take people a little while to get used to, but once they do, Golvin said, “most people would say it’s simpler and more natural.”
Gesture control has become increasingly sophisticated in the last year, moving beyond simple commands and toward a broadening world of interactive gaming and TV applications.
Much of the progress has come from Microsoft Corp.’s Xbox, which uses the Kinect motion sensor to let gamers play without a controller, swinging swords by moving their hands, throwing video punches by throwing real ones and generally playing games with their bodies.
In a keynote presentation at last week’s electronics show, Microsoft, which has sold more than 18 million Kinect cameras since the device debuted in late 2010, showed off a novel use of gesture control — something the company called “two way TV.”
After turning on a recorded episode of “Sesame Street,” a Microsoft executive and a tween-age girl named Ainsley watched as the rangy blue puppet Grover tripped on a skateboard and spilled a cardboard box full of coconuts.
When Grover asked the viewer for help getting the coconuts back in the box, Ainsley mimed picking one up and tossing it toward the screen. A moment later, a coconut flew into view on-screen, and Grover deftly caught it.
“Thank you!” he warbled. “Now I have one coconut in the box.”
PrimeSense, the Israeli firm that makes the microchip that powers the Xbox Kinect, showed off other applications of its gesture-sensing technology, including one that could enable online shoppers to virtually try on clothes — an attempt to solve the fitting problem that has long dogged Internet clothing retailers.
At another booth at the Las Vegas trade show, a lithe model stepped in front of a sensor-equipped TV. A moment later, the screen showed a mannequin similar to her body type. The model gently moved her open hand across the screen, selecting different parts of her new outfit — black pants, then a blue tank top with a bird feather unfolding across the torso. As she turned her body in the virtual mirror, the mannequin, dressed in the selected outfit, turned too.
Down the convention hall at the SoftKinetic booth, executives were practicing their golf swings. An application called Guru Training Systems uses a special depth-sensing camera to make a detailed recording every time you swing, replaying it in slow motion along with a breakdown of the mechanics — are you moving your head too much, leaning too far in one direction, swinging too slowly?
The camera works by flashing tiny lights at the subject and recording how quickly the light bounces back. If you’re facing the camera, photons bouncing off your ear will take longer to return than light bouncing off your nose. The method allows the software to build a three-dimensional, moving picture of your swing, which the company says will help athletes improve their motions, whether in golf, tennis, baseball, karate or any other sport in which precise movements are key.
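The arithmetic behind that trick is simple: because light travels at a fixed speed, the round-trip time measured at each pixel maps directly to a distance. The sketch below illustrates only that general principle, not SoftKinetic’s actual software; the function name and the example timings are hypothetical.

```python
# Illustrative sketch of time-of-flight depth sensing (assumed example,
# not SoftKinetic's code): distance is half the round trip of a light
# pulse multiplied by the speed of light.

SPEED_OF_LIGHT_M_PER_S = 299_792_458


def round_trip_to_depth(round_trip_seconds: float) -> float:
    """Return the distance in meters implied by one pixel's round-trip time."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2


# A nose about a meter from the camera returns light in roughly 6.7
# nanoseconds; an ear a few centimeters farther back takes slightly
# longer, and those per-pixel differences build the 3-D picture.
print(round_trip_to_depth(6.67e-9))  # ~1.00 m
print(round_trip_to_depth(7.00e-9))  # ~1.05 m
```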
“We’re really just scratching the surface of the technology,” said Eric Krzeslo, SoftKinetic’s chief strategy officer. “We know that we can go much further.”
Like where?
“The next big step will be to get rid of the screen,” Krzeslo said. “A two-dimensional screen is so limiting, it offers very poor feedback.”
Instead, your tennis swing might benefit more if you were ensconced in a virtual reality world where you could look in every direction, hearing sound and feeling vibration as you move. Or perhaps you’d want to be interacting with a robot coach that could stand next to you and guide your movements and posture with robot hands.
But, Krzeslo said, “that’s something we still have to invent.”