
Elon Musk said a Tesla could drive itself across the country by 2018. One just crashed backing out of a garage

Tesla says in the Model S owner’s manual that Summon is a “beta feature” and that the car can’t detect a range of common objects.
(Richard Vogel / Associated Press)

When Mangesh Gururaj’s wife left home to pick up their child from math lessons one Sunday this month, she turned on her Tesla Model S and hit “Summon,” a self-parking feature that the electric automaker has promoted as a central step toward driverless cars.

But as the $65,000 sedan reversed itself out of the garage, Gururaj said, the car bashed into a wall, ripping off its front end with a loud crack. He said the damaged Tesla looked like it would have kept driving if his wife hadn’t hit the brakes.

No one was hurt, but Gururaj was rattled: The car had failed disastrously during the simplest of maneuvers, using one of the most basic features of the self-driving technology he and his family had trusted countless times at higher speeds.


“This is just a crash in the garage. You can fix this. But what if we were summoning and there was a child it didn’t see?” said Gururaj, an IT consultant in North Carolina, who bought the car last year. “I had a lot of trust in Tesla, as a car, but that’s gone.... You’re talking about a big liability, and your life is at stake.”

The crash is an embarrassing mishap for a technology that Tesla chief Elon Musk unveiled in 2016 to great fanfare, saying it would soon enable owners to hit a button and have their cars drive across the country to meet them, recharging along the way.


But the crash also highlights the growing confidence problem facing driver-assistance technology and self-driving cars. The promise of auto-driving, robot-assisted, quasi-magical wonder cars has given way to a more nuanced reality: cars that also get confused or crash, often with little warning or explanation.

It’s not the first time the Summon feature’s safety and abilities have been called into question. In 2016, a Tesla owner in Utah said his Model S went rogue after he’d parked it, lurching ahead and impaling itself beneath a parked trailer. Tesla said the car’s logs showed the owner was at fault, but it later updated Summon with a new feature that could have prevented the crash.

When asked for details on the Gururaj crash, a Tesla spokesperson pointed only to the car’s owner’s manual, which calls Summon a “beta feature” and says the car can’t detect a range of common objects, including anything lower than the bumper or as narrow as a bicycle.


Driver-assistance systems such as Tesla’s Autopilot have been involved in a tiny fraction of the nation’s car crashes, and the companies developing the technologies say that in the long term they will boost traffic safety and save lives. Scrutiny of the rare crashes, they add, is misguided in a country where more than 40,000 people died on the road last year.

But the causes of the collisions are often a mystery, leaving drivers such as Gururaj deeply unnerved by the possibility they could happen again. Companies tightly restrict access to the cars' internal computer logs and typically reveal little about what went wrong, saying information on how cars' sensors and computers interact is proprietary and should be kept secret in a competitive industry.

That uncertainty has contributed to apprehension among drivers about a technology not yet proved for public use. Two public surveys released in July by the Brookings Institution think tank and the nonprofit Advocates for Highway and Auto Safety found that more than 60% of surveyed Americans said they were unwilling to ride in a self-driving car and were concerned about sharing the road with them.

Tesla says car owners must continually monitor their vehicle’s movement and surroundings and be prepared to stop at any time. But at the same time, Tesla pitches its self-driving technology as more capable than human drivers: Tesla’s website promises “full self-driving hardware on all cars,” saying they operate “at a safety level substantially greater than that of a human driver.”

Cathy Chase, president of the Advocates for Highway and Auto Safety, said Tesla’s strategy of beta-testing technologies with normal drivers on public roads is “incredibly dangerous.”

“People get lulled into a false sense of security” about how safe or capable the cars are, Chase said. “The Tesla approach is risky at best and deadly at worst.”


Tesla’s Autopilot has been involved in high-profile crashes. In 2016, a Tesla owner in Florida was killed when his Model S, driving on Autopilot, smashed into a tractor-trailer crossing ahead of him on the highway. The car did not slow down or stop to prevent the crash, but federal traffic safety investigators did not cite the company for any safety defects, saying Autopilot needed a driver’s “continual and full attention.”

In California this year, Tesla vehicles have smashed into the backs of a police cruiser and a parked fire truck while driving on Autopilot. The National Transportation Safety Board is investigating another Autopilot crash in March, during which a California driver was killed after his Model X automatically accelerated up to 70 mph in the last three seconds before smashing into a highway barrier.

Tesla has blamed some of the past Autopilot crashes on human error, suggesting the people in the driver’s seat had inadvertently hit the pedal or were not paying attention. The company has also designed the cars to repeatedly warn drivers to stay alert, flashing notifications when, for instance, the driver’s hands can’t be sensed on the wheel.

Gururaj said Tesla remotely pulled computer logs from the car to investigate the crash at his home garage. But the company told him it would not share any information about what happened, adding in an email, “You are responsible for the operation of your vehicle even during summon mode.”

Gururaj’s family, he said, had used Summon hundreds of times over the last year. “We thought this was the coolest feature,” he said. But he said he will stop using the self-driving features for fear they could malfunction while the car is in motion. He also said he was unnerved by Tesla’s response, which questioned why the human didn’t intervene quickly enough rather than why the car drove itself into a wall in the first place.

“They want us to rely on the technology because its response time is faster than humans’. This is the whole concept of automation,” he said. “For them to completely say it’s up to the customer to stop it, that’s really concerning. If the car can’t sense something on the front or on the side, then they shouldn’t put that as a feature. You’re putting your life at stake.”


Harwell writes for the Washington Post.
