Driverless cars are growing in number, but makers don’t want to reveal how they sometimes fail

Data from lidar, radar, cameras and GPS units are displayed inside a car equipped with a PolySync driverless-technology monitoring system during the AutoMobility LA trade show at the Los Angeles Convention Center in 2016.
(David McNew / Getty Images)

On March 18, a robot-driven Volvo operated by Uber hit and killed a pedestrian in Arizona.

Advocates for automation maintained that the tragedy shouldn’t detract from the promise that driverless technology will eliminate human error and make driving safer. But the death, and a fatality five days later that involved Tesla’s Autopilot driver-assist system, were unusual in another way: They were rare instances in which driverless-car companies were forced to share data about how their systems work, in this case with investigators.

A schism is developing in the driverless-car world — but not between fans and foes of robot cars.

Instead, on one side are driverless-car advocates who believe data transparency will lead to safer deployment of driverless vehicles and help alleviate public fears about the strange and disruptive new technology. On the other are some automobile and technology companies that, for good commercial reasons perhaps, prefer to keep their workings cloaked in mystery.


The lack of transparency about the workings of sensors, logic processors, mapping systems and other driverless technology, like the debate over robot-car regulation, could shape public perception of the nascent industry, said Bryant Walker Smith, a law professor at the University of South Carolina.

“Essentially, [the public will be] looking to see whether these companies are trustworthy,” he said.

Something went wrong. These are the things that went wrong. Here’s why they went wrong. Here’s what we’re going to do about it.

— Law professor Bryant Walker Smith, on how robot car companies might address inevitable tragedies.

The stakes are high. Driverless-vehicle technology is expected to roil major segments of the world economy, and market forecasters predict several hundred billion dollars a year in revenue for the winners.

Already semi-autonomous technologies such as Tesla Autopilot are operating on public roads, with deployment of driverless ride-hailing services from Waymo (a subsidiary of Google’s parent, Alphabet), Lyft, Cruise Automation and others due this year or next.

The close attention being paid to those tests was driven home Friday, when a Waymo van operating in autonomous mode was struck in a busy Chandler, Ariz., intersection by a car swerving to avoid another driver. Police said it looked like the Waymo vehicle was not at fault, but the accident remains under investigation.


To understand the controversy, and the effect on public safety, it helps to know what data are being collected and how such information might be put to use if it were made more visible.

Most people are familiar with the idea of a “black box,” technically known as an event data recorder. Such devices have been used for decades in the airline industry to help investigators evaluate crashes, and similar ones have become common in cars and trucks to record data on speed, steering, braking and the like over the few seconds before, during and after a crash.

The data issue today goes far beyond the black box, however. It now extends to cutting-edge robotic systems that use sophisticated sensors, complex computer chips and advanced software to take over some or all of the driving tasks that a human being would normally perform. The technology companies that create them are taking different approaches to engineering the systems. Industry and government have yet to determine how to use the data they generate after a crash.

In the Uber death, a video recorded by a dashboard camera — turned over to and released by Tempe, Ariz., police — showed the driverless-car system failed to brake for the pedestrian. It left open the question of whether the system sensors might have failed to notice the pedestrian at all.

Uber’s reaction was to apologize, then dip into some of its $15 billion in investment capital to pay the victim’s family in a legal settlement, thus avoiding a public trial.

Uber declined to make a company executive available to discuss data and transparency on the record, as did Waymo, Tesla and Lyft. Other companies — including Zoox, Nutonomy and General Motors, parent of Cruise Automation — agreed to talk.

A dash-cam video image shows Elaine Herzberg, 49, just before she was hit and killed by an Uber test vehicle in Tempe, Ariz.
(Tempe Police Department / Associated Press)

Even driverless-car advocates are growing concerned about the silence from the industry’s major players. Grayson Brulte, a well-known consultant in the driverless industry, worries that recent polls have consistently shown the public is wary about driverless technology, while companies appear reluctant to engage with the public.

“They’re like Rapunzel up in the tower,” he said. “They have to let down their hair and climb down.”

Alain Kornhauser, who heads the driverless-vehicle program at Princeton University, said he believes that robot cars will improve safety, reduce driver stress, add productive time to the day and offer the elderly and disabled more independence. But the technology is far from perfect, he said, and some robot-induced deaths are inevitable.

Rather than wall off the lessons learned in fatalities such as the recent Uber and Tesla incidents, Kornhauser said, the companies should be sharing crash data with one another, with outside researchers and with the general public. And not just black-box data, but driverless-system data as well. That would make driverless cars safer, faster, he said.

“Uber should not gain a safety advantage over everyone else because they were involved in this crash,” Kornhauser said. “All of the video, radar, lidar and logic trails in the seconds leading up to the crash should be released to the public.


“If this reveals some of Uber’s intellectual property, so be it. If they want to protect their intellectual property, they shouldn’t crash on public roads.”

Current policy in some ways confuses the situation further, Kornhauser said. After the Tesla Model X fatality in Mountain View, Calif., on March 23, Tesla Chief Executive Elon Musk defended the Autopilot system and seemed to blame the driver. He also repeated Tesla safety numbers that statistics experts have described as problematic. Musk’s words caused a public spat with Robert Sumwalt, head of the National Transportation Safety Board, who kicked Tesla off the investigative team.

Kornhauser suggested that transportation officials might be better off allowing objective data to be released while banning speculation that might favor a company or other party involved in a crash.

Karl Iagnemma, chief executive of driverless-technology company Nutonomy, said he believes a solution is possible. There is a concern, as in any other industry, that “if you share knowledge with a competitor, you might enable them to move more quickly.” But if the trade-off is a higher level of safety, he said, “I’m fine with that.”

Waymo CEO John Krafcik in March, announcing the company would buy up to 20,000 all-electric Jaguar I-Pace cars for its planned driverless ride-hailing service.
(Mark Lennihan / Associated Press)

Elements of the aviation safety model could be applied to driverless technology, he said. Airlines face far stricter black-box data requirements than automakers, and they confidentially share data with one another to improve safety. Eventually, government investigators reach conclusions and some of the data is made public.


“The promise of sharing data is that if data can be shared industrywide there’s a chance that you will not have that same crash happen again,” Iagnemma said. If federal authorities required such data from all industry players, “we would certainly use that information to improve our systems, absolutely.”

The information released to the public need not be highly technical and should avoid being defensive, according to law professor Smith. His suggestion: “Something went wrong. These are the things that went wrong. Here’s why they went wrong. Here’s what we’re going to do about it.”

But there’s no sign of that happening anytime soon, voluntarily or via regulation. Trump administration agencies have not said much about driverless-vehicle policy. Legislation is working its way through the Senate that would allow manufacturers to sell thousands of driverless cars each year to individuals, but the bill barely touches on data transparency. A similar bill quickly passed the House of Representatives last September.

The nonprofit Advocates for Highway and Auto Safety, which represents a broad range of safety advocates, is pushing for changes in the Senate bill, including the creation of a public database that would publicize defects and operational issues with commercial driverless and driver-assist systems — similar to the public complaints database the National Highway Traffic Safety Administration already maintains for conventional vehicles.

In a letter sent recently to Mitch Bainwol, chief executive of the lobby group Alliance of Automotive Manufacturers, the safety group noted that the Senate bill requires only that driverless-car companies “describe” their systems. “As such, manufacturers will continue to submit slick marketing brochures … instead of providing actual data and documentation that will allow the public and NHTSA to accurately evaluate the safety of the technology,” the letter said.

“We are pro technology,” said Cathy Chase, the advocacy group’s president. “We do want to see this technology succeed. We do want to see fewer people being killed and injured.”


But if driverless-vehicle companies retain full control of system safety data, she said, “you have the fox guarding the henhouse.”

Twitter: @russ1mitchell