An idea for making self-driving cars safer: Reprogram humans
You’re crossing the street wrong.
That is essentially the argument some self-driving car boosters have fallen back on in the months after the first pedestrian death attributed to an autonomous vehicle and amid growing concerns that artificial intelligence capable of real-world driving is further away than many predicted just a few years ago.
For the record:
9:35 a.m. Aug. 21, 2018: An earlier version of this story misspelled Drive.AI co-founder Carol Reiley’s last name as Riley.
In a line reminiscent of Steve Jobs’ famous defense of the iPhone 4’s flawed antenna — “Don’t hold it like that” — these technologists say the problem isn’t that self-driving cars don’t work, it’s that people act unpredictably.
“What we tell people is, ‘Please be lawful and please be considerate,’” said Andrew Ng, a well-known machine learning researcher who runs a venture fund that invests in AI-enabled companies, including self-driving start-up Drive.AI. In other words: no jaywalking.
Whether self-driving cars can correctly identify and avoid pedestrians crossing streets has become a burning issue since March, when an Uber self-driving car killed a woman in Arizona who was walking a bicycle across the street at night outside a designated crosswalk. The incident is still under investigation, but a preliminary report from federal safety regulators said the car’s sensors had detected the woman but its decision-making software discounted the sensor data, concluding it was probably a false positive.
Google spinoff Waymo has promised to launch a self-driving taxi service, starting in Phoenix, Ariz., this year, and General Motors Co. has pledged to start a rival service — using a car without a steering wheel or pedals — sometime in 2019. But it’s unclear if either will be capable of operating outside of designated areas or without a safety driver who can take over in an emergency. Meanwhile, other initiatives are losing steam. Elon Musk has shelved plans for an autonomous Tesla vehicle to drive across the United States. Uber has axed a self-driving truck program to focus on autonomous cars. Daimler Trucks, part of German automaker Daimler, now says commercial driverless trucks will take at least five years. Others, including Musk, had previously predicted such vehicles would be road-ready by 2020.
With these timelines slipping, driverless proponents such as Ng say there’s one surefire shortcut to getting self-driving cars on the streets sooner: persuade pedestrians to behave less erratically. If they use crosswalks, where there are contextual clues — pavement markings and stoplights — the software is more likely to identify them.
But to others the very fact that Ng is suggesting such a thing is a sign that today’s technology simply can’t deliver self-driving cars as originally envisioned. “The AI we would really need hasn’t yet arrived,” said Gary Marcus, a New York University professor of psychology who researches both human and artificial intelligence. He said that Ng is “just redefining the goal posts to make the job easier,” and that if the only way we can achieve safe self-driving cars is to completely segregate them from human drivers and pedestrians, we already have such technology: trains.
Rodney Brooks, a well-known robotics researcher and professor emeritus at MIT, wrote in a blog post critical of Ng’s sentiments that “the great promise of self-driving cars has been that they will eliminate traffic deaths. Now [Ng] is saying that they will eliminate traffic deaths as long as all humans are trained to change their behavior? What just happened?”
Ng argues that humans have always modified their behavior in response to new technology, especially modes of transportation. “If you look at the emergence of railroads, for the most part people have learned not to stand in front of a train on the tracks,” he said. Ng also notes that people have learned that school buses are likely to make frequent stops and that when they do, small children may dart across the road in front of the bus, so they drive more cautiously. Self-driving cars, he said, are no different.
In fact, jaywalking became a crime in most of the United States only because automobile manufacturers lobbied intensively for it in the early 1920s, in large measure to head off strict speed limits and other regulations that might have affected car sales, said Peter Norton, a history professor at the University of Virginia who wrote a book on the topic. So there is a precedent for regulating pedestrian behavior to make way for new technology.
Although Ng may be the most prominent self-driving proponent calling for training humans as well as vehicles, he’s not alone. “There should be proper education programs to make people familiar with these vehicles, the ways to interact with them and to use them,” said Shuchisnigdha Deb, a researcher at Mississippi State University’s Center for Advanced Vehicular Systems. The U.S. Transportation Department has emphasized the need for such consumer education in its latest guidance on autonomous vehicles.
Maya Pindeus, co-founder and chief executive of Humanising Autonomy, a London start-up working on models of pedestrian behavior and gestures that self-driving car companies can use, likens such lessons to public awareness campaigns Germany and Austria instituted in the 1960s after a spate of jaywalking fatalities. Such efforts helped reduce pedestrian road fatalities in Germany from more than 6,000 deaths in 1970 to fewer than 500 in 2016, the last year for which figures are available.
The industry is understandably keen not to be seen offloading the burden onto pedestrians. Uber and Waymo both said in emailed statements that their goal is to develop self-driving cars that can handle the world as it is, without depending on changing human behavior.
One challenge for these and other companies is that driverless cars are such a novelty right now, pedestrians don’t always act the way they do around regular vehicles. Some people just can’t suppress the urge to test the technology’s artificial reflexes. Waymo, which is owned by Alphabet Inc., routinely encounters pedestrians who try to “prank” its cars, repeatedly stepping in front of them, moving away and then stepping back in front of them, to impede their progress.
The assumption seems to be that driverless cars are designed to be extra cautious so the practical joke is worth the risk. “Although our systems do have super-human perception, sometimes people seem to think Newton’s laws no longer apply,” said Paul Newman, the co-founder of Oxbotica, a British start-up making autonomous driving software.
Over time, driverless cars will become less fascinating, and people will presumably be less likely to prank them. In the meantime, the industry is debating what steps companies should take to make humans aware of the cars and their intentions.
Drive.AI, which was co-founded by Ng’s wife, Carol Reiley, has made a number of modifications to the self-driving vehicles it’s road testing in Frisco, Texas. They’re painted a distinctive bright orange, increasing the chance that people will notice them and recognize them as self-driving. Drive.AI also pioneered the use of an external LED-display screen, similar to the ones many city buses use to display their destination or route number, that can convey the vehicle’s intentions to humans. For instance, a vehicle stopped at a crosswalk might display the message: “Waiting for you to cross.”
Uber has taken this idea further, filing patents for a system that would include a variety of flashing external signage and holograms projected in front of the car to communicate with human drivers and pedestrians. Google has filed patents for its own external signage. Oxbotica’s Newman said he likes the idea of such external messaging as well as distinctive sounds — much like the beeping noise large vehicles make when reversing — to help ensure safe interactions between humans and autonomous vehicles.
Deb said her research shows that people want external features and audible communication or warning sounds of some kind. But so far, apart from Drive.AI’s, the cars these companies are using in road tests don’t include such modifications. It’s also not clear how pedestrians or human drivers could communicate their intentions to self-driving vehicles, something Deb said may also be necessary to avoid accidents.
Pindeus’ company wants those building self-driving cars to focus more on understanding the nonverbal cues and hand gestures people use to communicate. The problem with most of the computer vision systems that self-driving cars use, she said, is they simply put a bounding box around an object and apply a label — “parked car,” “bicycle,” “person” — without the ability to analyze anything happening inside that box.
Eventually, better computer vision systems and better AI may solve this problem. Over time, cities will probably remake themselves for an autonomous age with “geofencing” — a fancy term for creating separate zones and designated pickup spots for self-driving cars and taxis. In the meantime, your parents’ advice probably still applies: Don’t jaywalk, and look both ways before crossing the street.
Kahn writes for Bloomberg.