
A Human Touch for Machines

TIMES STAFF WRITER

If the face is a window into the soul, then Javier Movellan has peered deeply into the human condition.

His research team has studied more than 100,000 faces, analyzing each one for the smallest shifts in facial muscles--a lexicon of emotional expression. A computer scans the faces 30 times a second and then squirrels away the information in a bulging databank.

Pausing to gather his thoughts, Movellan rubs his eyes and contemplates the face of the young woman on his computer screen. She seems cheerful, but her eyes squint slightly--a hint of vexation?


There is no quick way for Movellan to say, but somewhere in the trillions of bits of information stored in his computer, he is convinced, there is an answer.

For the last decade, the UC San Diego psychologist has traveled a quixotic path in search of the next evolutionary leap in computer development: training machines to comprehend the deeply human mystery of what we feel.

Movellan’s devices now can identify hundreds of ways faces show joy, anger, sadness and other emotions. The computers, which operate by recognizing patterns learned from a multitude of images, eventually will be able to detect millions of expressions.

Scanning dozens of points on a face, the devices see everything, including what people may try to hide: an instant of confusion or a fleeting grimace that betrays a cheerful front.

Such computers are the beginnings of a radical movement known as “affective computing.” The goal is to reshape the very notion of machine intelligence.

It finds inspiration in Hal, the eerily alluring supercomputer of “2001: A Space Odyssey,” which transcended mere computation with astute emotional skills and even a sense of duty. Compared with its impassive astronaut companions, Hal seemed the most human figure in the 1968 film.


Affective computing would transform machines from slaves chained to the limits of logic into thoughtful, observant collaborators. Such devices may never replicate human emotional experience.

But if their developers are correct, even modest emotional talents would change machines from data-crunching savants into perceptive actors in human society. At stake are multibillion-dollar markets for electronic tutors, robots, advisors and even psychotherapy assistants.

With other pioneers of this new realm, Movellan, a quiet, 41-year-old Spaniard, is turning the field of artificial intelligence, or AI, upside down.

For decades, computer scientists have pursued the holy grail of AI: a thinking machine. Their efforts have produced devices of astonishing sophistication.

Yet each new generation of technology follows a pattern set by the earliest design for a programmable computer, the mechanical “Analytical Engine” that mathematician Charles Babbage began devising in the 1830s.

Redefining What It Means to Feel

Classical AI researchers model the mind through the brute force of exhaustive logical calculation. But they falter at humanity’s fundamental motivations. Romantic love can be as irrational as it is compelling. And every teacher knows the futility of logic for resolving playground disputes, as do diplomats in conflicts between nations.


Movellan is part of a growing network of scientists working to disprove long-held assumptions that computers are, by nature, logical geniuses but emotional dunces.

The ability to interpret markers for emotion--facial expressions, vocal tones and metabolic responses such as blood pressure--may seem like crude first steps.

Yet experts see machine intelligence, unswayed by human frailty and bias, as an eventual advantage. They envision machines that know us better than we know ourselves.

No one can say whether such a goal will be achieved. Some say that without the ability to experience emotions--far beyond today’s technology--perceptive machines would offer simplistic, unreliable readings of human feelings. Others recoil at the prospect, suggesting that if machines perceive, store and catalog people’s emotional responses, they would open a new assault on personal privacy.

But if scientists are right about the potential of today’s research, emotion machines would force a debate that could redefine intelligence, artificial or human, and shed new light on the core of humanness--what it means to feel.

“Modern AI is offering us [a] realization that ... the essence of intelligence is in our capacity to perceive patterns, deal with uncertainty and operate successfully in the natural world,” Movellan said. “Emotional processes may be a form of intelligence more complex and important than we ever imagined.”


Affective computing updates an age-old fascination. In some versions of the ancient Jewish myth, the clay creature Golem gains human desires when a slip of paper inscribed with the name of God is placed in its mouth. Like Pinocchio and Frankenstein’s monster, Golem is a touchstone for the often frightful preoccupation with turning inanimate objects into sentient beings.

The word “robot” (Czech for “forced labor”) was coined in a 1920 stage play in which machines assume the drudgery of factory production, then develop feelings and turn against their makers. Hal in “2001” was programmed with intuition and empathy to keep astronauts company, only to become a murderer.

Scientists don’t foresee machines with Hal’s emotional skills--or, fortunately, its malevolence--soon. But they already have debunked AI orthodoxy considered sacrosanct only five years ago--that logic is the one path to machine intelligence.

It took psychologists and neuroscientists--outside the computer priesthood--to see inherent limits in the mathematical pursuit of intelligence that has dominated computer science.

For Terry Sejnowski, director of the Institute for Neural Computation at UCSD and Movellan’s mentor, the pursuit of emotion machines began a decade ago when he viewed Sexnet, a program designed to distinguish male from female faces that had been stripped of cultural cues such as hair and cosmetics. In a test against people, the computer proved the better judge.

Sejnowski began to imagine computers that see past faces to the emotions behind them. Now he and Movellan are helping to create a digital compendium of human emotion--”a catalog of how people react to the world.”


The basis of that catalog is a coding system developed in the 1970s by UC San Francisco psychologist Paul Ekman, who classified dozens of facial muscle movements into 44 discrete units--phonemes of emotional expression.

These “action units” define the meaning of raised eyebrows and furrowed brows. Experts in Ekman’s method recognize combinations of movements that correspond to dozens of variations on basic expressions--such as joy, surprise, anger, fear, sadness and disgust--interpreted with remarkable consistency across human cultures.
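
To make the idea concrete, here is a minimal sketch, in Python, of how such a coding scheme might be represented in software. The action-unit combinations shown are commonly cited textbook prototypes, and the data structure is purely illustrative--not the catalog Movellan and Sejnowski are building.

```python
# Illustrative only: a tiny stand-in for an Ekman-style coding table.
# Each prototype expression is described by the numbered facial "action
# units" (AUs) that typically combine to produce it. Exact AU lists vary
# by coder and by study; these are common textbook examples.
PROTOTYPE_EXPRESSIONS = {
    "joy":      {6, 12},            # cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26},      # raised brows, widened eyes, dropped jaw
    "sadness":  {1, 4, 15},         # inner brow raise, brow lower, lip corners down
    "anger":    {4, 5, 7, 23},      # lowered brows, glare, tightened lips
    "disgust":  {9, 15, 16},        # nose wrinkle, lowered lip corners
    "fear":     {1, 2, 4, 5, 20, 26},
}

def best_match(observed_units: set) -> str:
    """Return the prototype whose action units overlap most with what was seen."""
    overlap = lambda name: len(PROTOTYPE_EXPRESSIONS[name] & observed_units)
    return max(PROTOTYPE_EXPRESSIONS, key=overlap)

# A frame showing a cheek raise (AU 6) and pulled lip corners (AU 12)
# scores highest against the "joy" prototype.
print(best_match({6, 12, 25}))   # -> "joy"
```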

Movellan’s team videotapes subjects who show a range of emotions. The researchers feed the images into a computer, then use pattern-recognition software to train the computer to make Ekman assessments and to generalize from one person to the next.
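
As a rough illustration of that training step, the sketch below fits an off-the-shelf classifier to frames described by action-unit intensities and labeled with expressions. The data are synthetic and the choice of algorithm is an assumption; the article does not say what software Movellan’s group uses.

```python
# Hypothetical sketch of the training step: stand-in data, not the UCSD
# team's actual pipeline. Each row holds intensities for the 44 action
# units measured in one video frame; each label is the expression a human
# coder assigned to that frame.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

EXPRESSIONS = ["joy", "surprise", "anger", "fear", "sadness", "disgust"]
N_ACTION_UNITS = 44   # Ekman's system defines 44 discrete units

X = rng.random((600, N_ACTION_UNITS))    # synthetic action-unit intensities
y = rng.choice(EXPRESSIONS, size=600)    # synthetic expert labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A support-vector machine is one common pattern-recognition choice; the
# article does not specify which learning algorithm the researchers use.
model = SVC(kernel="rbf")
model.fit(X_train, y_train)

print("accuracy on held-out frames:", model.score(X_test, y_test))
```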

Smiles and wrinkles are only first steps. Researchers are adding body language, vocal tones, speech recognition and metabolic signals to give computers a richer mix from which to draw conclusions.

Scientists at the Massachusetts Institute of Technology have fashioned earrings to measure blood volume pulse and shoes to monitor the electrical conductivity of the feet--much the way a lie detector works. About 80% of the time, a computer correctly relates such data to emotional states, such as joy and anger.

Perceptive machines soon may assist even top clinicians. The keenest human observer often misses or misinterprets revealing yet ephemeral expressions.


The computer, however, never blinks. Recording a fleeting grimace can solve a standard therapeutic dilemma: deciding what a patient is really feeling, even when the patient is unsure.

And computers are free of the psychological baggage that clouds human perceptions.

Jeffrey Cohn, a University of Pittsburgh psychologist and pioneer in machine perception, said one of his researchers is exploring these techniques to quantify conflict in schizophrenics, whose feelings and expressions are often out of sync.

“Clinicians may sense that something is not quite right but be unable to describe it,” he said. “By creating a tool that can perform these kinds of analyses, we expand the therapist’s repertoire.”

His colleague has found that while normal people raise their eyebrows in surprise or delight, schizophrenics do so randomly. Such insights could lead to early intervention for at-risk patients.

From Helpful to Obnoxious

Psychology provides inspiration for emotion machines, but their success depends on commercialization. Consider Pod, a concept car from Toyota Motor Corp. and Sony Corp., with features straight out of the sci-fi cartoon “The Jetsons.”

Pod, short for “personalization on demand,” is a cross between a video game and a lie detector. It “attempts to monitor not only driver preferences but the driver’s state of mind,” said Dave Hermance, Toyota’s top environmental engineer.


At the driver’s right, a silver joystick replaces the steering wheel and pedals for complete one-handed control. The question is: Who’s controlling whom?

Switch on the ignition, and the car begins to monitor your heart rate and perspiration through joystick sensors. A computer records your driving habits.

“If over time it notices that your driving is erratic”--rapid acceleration followed by sudden braking or sharp turns--”Pod plays soothing music and blows air in your face, cooling you down from your excited state,” Hermance said.

Hermance allows that some features may cross the line from helpful to obnoxious.

“If you are really driving badly, it pulls over to the curb. If it did that to me, I’d shoot it,” he said.

Pod may not hit the freeways for a while, but the burgeoning robotics market already is emotion-driven. Sony’s dog-bot Aibo--an expensive electronic “pet”--uses lights, sounds and gestures to portray joy and fear in response to praise or scolding.

Such primitive skills gradually will be replaced by accurate perceptions of a broad range of moods and emotions.


The largest commercial effect of emotion machines might be on marketing, experts say--focus groups based not on what people say about a product, but on what they feel.

Skeptics concede the potential of perceptive machines. But the idea of computers with genuine understanding and the ability to credibly mimic a human response--a likely outcome of today’s work, some experts say--strikes them as far-fetched, if not dangerous.

Critics See Danger to Personal Privacy

Just as standard computers solve complex equations by chopping them into millions of pieces, emotion machines divide human characteristics--facial gestures, voice tones and sweat--into bits of emotional data to categorize.

But understanding is a far different and more difficult process.

“You don’t get emotions by manipulating 0s and 1s,” said John Searle, a UC Berkeley philosopher known for challenging the intellectual underpinnings of AI. “Simulation of digestion won’t digest pizza.”

Psychiatrists say the very emotional responses that sometimes cause us to misinterpret others’ intent may, paradoxically, ensure that machines never equal humanity’s perceptive skills: how we ourselves feel about a person offers clues to how that person affects others.

Ronnie Stangler, chairwoman of the American Psychiatric Assn.’s technology committee, said top clinicians realize that “it’s the richness of our history, our personal experience and our relationships that make us ... appreciate the emotional state in the larger context of a person’s life.”


But the ability of perceptive machines to quantify emotions provides a strong incentive for corporations or governments to capture the data.

Stangler said the prospect opens a range of new dangers concerning personal privacy.

“Can you imagine those same credit bureaus that know the size of our mortgage and our credit card debt knowing also how anxious we are?” Stangler said.

Still, actual understanding of emotions may not be required to fundamentally transform our relationships with machines.

The last time humanity was forced to recast its assumptions about technology was during the Industrial Revolution, when machines went from enhancing human abilities to exceeding them--from helper to replacement. The shock provoked generations of social and economic dislocation.

The digital revolution has been less disorienting. Today’s devices are still servants ruled by logic.

Emotion machines could end that implicit social contract between people and machines and create a perplexing new one.


“If robots are visibly sad, bored or angry, humans, starting with children, will react to them as persons,” writes John McCarthy, an AI pioneer at Stanford University and a critic of emotion machines.

“Human society is complicated enough already.”
