Column: Self-driving car deaths raise the question: Is society ready for us to take our hands off the wheel?

The self-driving car era, such as it is, has not gotten off to an auspicious start.

Not long ago, technology and automobile companies were talking as though the time was just around the corner when human drivers would be sharing the road with cars and trucks barreling along the highways under machine control, while their occupants read, worked or snoozed. Transportation companies were agitating for government regulators to take the restraints off experimental vehicles for tests on crowded urban streets and busy freeways.

But on March 18, an Arizona woman was killed by an Uber self-driving car apparently not under the control of a human in the cabin. Five days later, the occupant of a top-of-the-line Tesla Model X with its semiautonomous system engaged died after the vehicle hit a highway barrier in Northern California, was struck by two other cars, and caught fire. Those incidents revived memories of the 2016 crash in which the occupant of a Tesla running on its Autopilot system in Florida was killed in a collision with a truck the system apparently had not spotted.

After California imposed rules governing autonomous vehicle tests, Arizona welcomed Uber’s fleet of self-driving cars with open arms. Following the latest deaths, Arizona Gov. Doug Ducey suspended the program. Uber also announced it would not renew its California testing permit.

These actions dodge a fundamental question: Are our social and legal structures ready to deal with self-driving cars? Or to put it another way — are we ready?

To Ryan Calo, an expert in robotics and cyberlaw at the University of Washington Law School, many aspects of these questions are easy to answer, and the answer is yes.

“People underappreciate how many cases there have been specifically about robots, and plenty of cases involving cars and car safety features and shared responsibility for car accidents,” Calo told me. “So is there enough previous case law for courts to analogize to this situation? I think so.”

Most encounters between a self-driving car and humans that result in injuries or death fall into categories perfectly familiar to the legal system, he observes.

“If there’s some question about who was in control of the car or some question about whether it was the software or the sensors or the driver had some obligation to be mindful, these kinds of things have come up time and again, and the courts sort them out. In many ways, there are not genuinely difficult puzzles presented by driverless car liability.”

As Calo points out in an article set to appear in an upcoming issue of Communications of the ACM and adapted by Slate, there’s a difference between the Florida Tesla case and the recent accident in Arizona: In the first case, the driver voluntarily turned over control of his car to Autopilot, and therefore could be held responsible for his own death. In the second, the pedestrian made no such choice.

Existing law knows how to distinguish between these cases, usually by assuming that an injured pedestrian is blameless. All that remains is to apportion liability, which is within the courts’ wheelhouse. In the Arizona case, as it happens, Uber’s responsibility appeared so clear-cut that it settled with the victim’s family in nano-time.

But Calo observes that there will be a category of cases that present novel questions for judges. He calls these “genuinely unforeseeable categories of harm.” Collisions with pedestrians are foreseeable. But robot cars may do things that are harder to sort out. Calo posits the example of a hybrid car expected to start each day with a full battery. One night its owners forget to plug it in, and the machine decides on its own to run its gas engine to charge up, suffocating the household with carbon monoxide.

The designers, Calo writes, “understood that a driverless car could get into an accident. They understood it might run out of gas and strand the passenger. But they did not in their wildest nightmares imagine it would kill people through carbon monoxide poisoning.” It’s conceivable that they would not be held liable. As more such “adaptive systems” take their place alongside humans in daily life, Calo writes, “courts will have to reexamine the role foreseeability will play as a fundamental arbiter of proximate causation and fairness.”

One imponderable issue is how humans will think of their self-driving cars. Approaches to responsibility and liability may differ depending on whether we think of them as pure machines, as simulated humans in steel skins, or as persons.

As Calo wrote in a 2014 paper, people may be “hardwired to react to anthropomorphic technology such as robots as though they were interacting with a person.” It’s not uncommon for people to name their cars or endow them with fancied personalities, even though they don’t resemble human beings. But think how much easier that might be if cars can do much more without direct human guidance.

How society, law and the courts treat driverless cars may depend heavily on where the cars fit in the continuum stretching from dumb machines like toasters, to personified pieces of equipment, to human-like helpmeets. That will have implications for whether we allow driverless cars to share the road with us, much less take over the highway entirely.

“In a lot of cases the courts have struggled to figure out is this a machine, or is this like a person?” he says. “Insofar as driverless cars feel like they have agency, then yes, we may hold them to a higher standard, and yes, we may struggle to characterize them, just as kids struggle to decide whether a robot is alive or not.”

Keep up to date with Michael Hiltzik. Follow @hiltzikm on Twitter, see his Facebook page, or email michael.hiltzik@latimes.com.
