Who’s the driver of that Google car? Feds ready to say it’s the computer
A car’s driver doesn’t necessarily have to be human: The artificial intelligence behind Google Inc.'s self-driving system could count, according to federal highway safety officials.
In a letter posted on the National Highway Traffic Safety Administration’s website, the agency responded to Google’s request for interpretation of several federal safety standards as they apply to the tech giant’s self-driving cars.
As a premise of the interpretation, “NHTSA will interpret ‘driver’ in the context of Google’s described motor vehicle design as referring to the [self-driving system], and not to any of the vehicle occupants,” Chief Counsel Paul Hemmersbaugh said in the letter. “We agree with Google its [self-driving vehicle] will not have a driver in the traditional sense that vehicles have had drivers during the last more than 100 years.”
Analysts said this could make it easier for the Mountain View, Calif., tech giant to roll out driverless cars and could potentially apply to other autonomous vehicle makers as well.
“It does provide a different definition of the driver,” said Thilo Koslowski, vice president and automotive practice leader at research firm Gartner. “It could accelerate significantly legislation in all the different states for autonomous vehicles to be on the roads.”
This is just the latest federal boost for self-driving cars. The Transportation Department said in January that federal guidelines for how such vehicles will operate, as well as a model state policy, would be developed in six months.
Not all government agencies are fully on board. In December, the California Department of Motor Vehicles released draft rules for self-driving cars, including a requirement that the vehicles have a steering wheel and a human driver ready to take control if necessary.
Google’s vehicle design removes conventional controls such as steering wheels and brake pedals because the company believes that giving human occupants access to these operations could be “detrimental to safety because the human occupants could attempt to override the [self-driving system’s] decisions,” the NHTSA letter says.
There are still many obstacles to overcome before driverless cars could make a widespread debut on public roads. In the letter, NHTSA said Google also must certify that self-driving technology meets standards developed for cars with human drivers and that the agency itself must have some way to determine compliance.
Google might be able to prove that certain standards are unnecessary for a particular vehicle design, though the company “has not made such a showing,” the federal safety agency said.
Google spokesman Johnny Luu said the company was considering the letter but had no other comment.
U.S. Transportation Secretary Anthony Foxx called NHTSA’s interpretation significant, though he said other hurdles for automakers remain.
“The burden remains on self-driving car manufacturers to prove that their vehicles meet rigorous federal safety standards,” Foxx said in a statement.
Santa Monica-based Consumer Watchdog said NHTSA was wrong to say self-driving technology could be interpreted as the car’s driver.
In a statement, the consumer group pointed to the “disengagement” reports submitted to the DMV by major companies testing self-driving cars in California, which detailed the number of times a human had to take control of a vehicle.
Companies testing the technology, including Google, Nissan, Mercedes-Benz and Delphi Automotive, reported how many times human drivers had to grab the wheel during the year that ended in November — in some cases as often as once every mile or two.
Google’s driverless cars experienced 272 disengagements, or about one incident every 1,244 miles.
“The robot cars simply cannot reliably deal with everyday real traffic situations,” said John Simpson, Consumer Watchdog’s privacy project director. “Without a driver, who do you call when the robots fail?”