Driverless cars offer a future with fewer deaths on the roadways. Today, roughly nine out of 10 car crashes are caused by human error; autonomous vehicles, with their sensors, radar and undistractable computer-driven systems, should be much safer. That is, they should be much safer eventually.
But they still have some glaring shortcomings, a point that was underlined in tragic fashion this week. On Sunday a self-driving Uber plowed into a pedestrian crossing a road in Tempe, Arizona, killing her. A video of the incident released Wednesday shows that the woman was crossing mid-street in the dark. According to reports, the car didn’t slow down; there was no braking, no swerving, and no attempt by the vehicle or its back-up operator (who had been looking away from the windshield) to avoid hitting her.
This is the kind of situation in which an autonomous car is supposed to perform better than a human driver. The radar and sensors these vehicles rely on are designed to pick up what the human eye may miss in the shadows. That didn’t happen Sunday in Tempe. Federal authorities are investigating the collision.
Uber immediately suspended its self-driving car programs in San Francisco, Pittsburgh, Toronto and the Phoenix area. Toyota and NuTonomy, a Boston-based self-driving company, announced this week that they would temporarily suspend testing.
That’s the right response. There’s been a race among carmakers and tech companies to see who can get their experimental vehicles on the street and to the market first. There’s also been heavy lobbying on lawmakers to allow the mass deployment of self-driving vehicles. While it was inevitable that a driverless car would eventually be involved in a fatal collision — autonomous vehicles are unlikely to eliminate crashes, just reduce them — it would be irresponsible to speed ahead without taking stock of how this new technology is performing.
There’s a dilemma, of course. Companies need to be able to test their driverless cars on public roads in order to design systems that can respond to real-life situations. Cities and states need those tests as well to understand how to prepare for the arrival of autonomous cars. Transportation safety regulators, as well as manufacturers, have to figure out how to do more real-world, independently verified stress-testing to hone the technology without harming people in the process. If that means slowing the rollout of driverless cars, that’s OK.
So far, there’s no comprehensive data on how driverless cars are performing on tests or whether the vehicles are ready for commercial use. There are no federal rules governing the deployment and performance of autonomous technology. There are no standardized tests the cars are required to pass before using public roads. (Safety advocates, for example, have called for vehicles to pass a kind of driver’s test to demonstrate that they can identify and respond to cars, pedestrians and cyclists along their path, as well as traffic signs and road markings.) Current policies let the car’s manufacturer decide when the vehicle is safe enough for public use.
The Trump administration intends to continue that laissez-faire approach with voluntary safety assessments. Congress is considering legislation that would allow the industry to put up to 100,000 autonomous vehicles on the road per year before federal regulators develop safety standards for the technology. The proposal would also take away states’ ability to regulate the performance of autonomous vehicle systems.
Sen. Dianne Feinstein (D-CA) and several colleagues have tried to put the brakes on the legislation, arguing that it wouldn’t do enough to ensure that self-driving cars are no more likely to crash than human drivers are, and that they offer at least as much protection against injuries.