Ethical dilemma on four wheels: How to decide when your self-driving car should kill you

Self-driving cars have a lot of learning to do before they can replace the roughly 250 million vehicles on U.S. roads today. They need to know how to navigate when their pre-programmed maps are out of date. They need to know how to figure out where the lane dividers are on a street that’s covered with snow.

And, if the situation arises, they’ll need to know whether it’s better to mow down a group of pedestrians or spare their lives by steering off the road, killing all passengers onboard.

This isn’t a purely hypothetical question. Once self-driving cars are logging serious miles, they’re sure to find themselves in situations where an accident is unavoidable. At that point, they’ll have to know how to pick the lesser of two evils.

The answer could determine whether self-driving cars become a novelty item for the adventurous few or gain widespread acceptance among the general public.

In other words, the stakes are huge.

Nearly 34,000 people die in car crashes in the U.S. each year, and another 3.9 million people are injured badly enough to go to a hospital emergency room, according to the Centers for Disease Control and Prevention. The National Highway Traffic Safety Administration says 93% of traffic accidents can be blamed on human error, and the consulting firm McKinsey & Co. estimates that if humans were taken out of the equation, the savings from averted crashes would add up to about $190 billion a year.

“Us having to drive our own cars is responsible for a tremendous amount of misery in the world,” said University of Oregon psychologist Azim Shariff, who studies the factors that prompt people to make moral decisions.

Shariff teamed up with psychological scientist Jean-François Bonnefon of the Toulouse School of Economics in France and Iyad Rahwan, who studies social aspects of artificial intelligence at the MIT Media Lab, to investigate the ethics of driverless cars in a systematic, data-driven manner.

They began by finding out what would make self-driving cars most palatable to future passengers. In a series of surveys involving nearly 2,000 people, the trio sketched out increasingly fraught scenarios and asked respondents what they thought the car should do.

The easiest question was whether a self-driving car with a single passenger should crash itself into a wall to avoid hitting a group of 10 pedestrians. About three-quarters of respondents agreed that sacrificing one life to save many more was the moral thing to do.

After that, things started to get tricky. The fewer pedestrians there were to save, the weaker the consensus that the car should sacrifice its passenger. If crashing into a wall would save just one pedestrian, only 23% of those surveyed thought that’s what the car should do.

When the researchers asked people to imagine that they were riding in the car with their child or another relative, their willingness to swerve away from innocent pedestrians faltered. Still, between 54% and 66% of survey takers agreed that the car should do what it must to save as many lives as possible.

This pattern of responses revealed people’s strong underlying preference for a “utilitarian” set of rules designed to maximize lives saved and minimize deaths — with one big exception.

“People want to live in a world in which driverless cars minimize casualties, but they want their own car to protect them at all costs,” Rahwan said.
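
To make that tension concrete, here is a minimal sketch of the two decision rules the surveys pitted against each other: a utilitarian rule that simply minimizes total deaths, and a self-protective rule that shields the car’s own occupants first. It is an illustration only; the data structure and function names are hypothetical and do not come from the study or from any real vehicle software.

```python
# Illustration only: hypothetical decision rules, not code from the study
# or from any real self-driving system.
from dataclasses import dataclass


@dataclass
class Outcome:
    """Projected result of one possible maneuver in an unavoidable crash."""
    maneuver: str
    passenger_deaths: int
    pedestrian_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.passenger_deaths + self.pedestrian_deaths


def utilitarian_choice(outcomes):
    """Minimize total deaths, no matter whose they are."""
    return min(outcomes, key=lambda o: o.total_deaths)


def self_protective_choice(outcomes):
    """Protect the car's own passengers first, then minimize other deaths."""
    return min(outcomes, key=lambda o: (o.passenger_deaths, o.pedestrian_deaths))


# The survey's easiest scenario: swerve into a wall (one passenger dies)
# or stay on course (10 pedestrians die).
options = [
    Outcome("swerve into wall", passenger_deaths=1, pedestrian_deaths=0),
    Outcome("stay on course", passenger_deaths=0, pedestrian_deaths=10),
]

print(utilitarian_choice(options).maneuver)      # -> swerve into wall
print(self_protective_choice(options).maneuver)  # -> stay on course
```

The two rules disagree even in the study’s simplest scenario, and that gap is exactly what shows up in the purchasing preferences the researchers measured.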

Another sign of this sentiment was that 50% of survey takers said they’d be likely to buy a self-driving car that placed the highest value on passenger protection, while only 19% would purchase a model designed to save the most lives.

Nor were most people in favor of laws requiring self-driving cars to act in a utilitarian manner — up to two-thirds of people opposed this idea. In a separate survey question, 21% of people said they’d buy a self-driving car if such a law were in place, while 59% would buy one if it weren’t.

The results were published online Thursday in the journal Science.

The study is just the beginning of an uncomfortable but necessary discussion about driverless cars, the researchers said.

What if the pedestrians in a car’s path include children or an elderly person? What if they’re jaywalking? What if the passenger in the self-driving vehicle is a surgeon en route to a hospital to perform life-saving surgery? Does that change the moral calculus? (To see more of the questions — and test your own responses — visit moralmachine.mit.edu.)

With time, people might become more comfortable with these kinds of trade-offs, said Joshua Greene, an experimental psychologist, neuroscientist and philosopher at Harvard.

“Today, cars are beloved personal possessions, and the prospect of being killed by one’s own car may feel like a personal betrayal,” Greene wrote in a commentary that accompanies the study.

In the future, however, self-driving vehicles might have the same emotional resonance as individual subway cars. “As our thinking shifts from personal vehicles to transportation systems, people might prefer systems that maximize overall safety,” Greene wrote.

In the meantime, there’s no denying that some of the ethical problems posed by self-driving cars are new. After all, Bonnefon said, this is a product that “might decide to kill you, even if you do everything right.”

karen.kaplan@latimes.com

Follow me on Twitter @LATkarenkaplan and “like” Los Angeles Times Science & Health on Facebook.
