Elon Musk and AI experts urge U.N. to ban artificial intelligence in weapons

Tesla and SpaceX Chief Executive Elon Musk has joined dozens of CEOs of artificial intelligence companies in signing an open letter urging the United Nations to ban the use of AI in weapons before the technology gets out of hand.

The letter was published Monday — the same day the U.N.’s Group of Governmental Experts on Lethal Autonomous Weapons Systems was to discuss ways to protect civilians from the misuse of automated weapons. That meeting, however, has been postponed until November.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” read the letter, which was also signed by the chief executives of companies such as Cafe X Technologies (which built an autonomous barista) and PlusOne Robotics (whose robots automate manual labor). “Once this Pandora’s box is opened, it will be hard to close. Therefore we implore the High Contracting Parties to find a way to protect us all from these dangers.”

The letter’s sentiments echo those in another open letter that Musk — along with more than 3,000 AI and robotics researchers, plus others such as physicist Stephen Hawking and Apple co-founder Steve Wozniak — signed nearly two years ago. In the 2015 letter, the signatories warned of the dangers of artificial intelligence in weapons, which could be used in “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”

Many nations are already familiar with drone warfare, in which human-piloted drones are deployed in lieu of putting soldiers on the ground. Lower costs, along with the fact that they don’t put military personnel at risk, have contributed to their rising popularity. Future capabilities for unmanned aerial vehicles could include autonomous takeoffs and landings, while underwater drones could eventually roam the seas for weeks or months, collecting data to send back to human crews on land or aboard ships.

Automated weapons would take things a step further, removing human intervention entirely and potentially improving efficiency. But they could also open a whole new can of worms, according to the 2015 letter, “lowering the threshold for going to battle” and creating a global arms race in which lethal technology can be mass-produced, deployed, hacked and misused.

For example, the letter says, armed quadcopters could search for and eliminate people who meet predefined criteria.

“Artificial intelligence technology has reached a point where the deployment of such systems is — practically, if not legally — feasible within years, not decades, and the stakes are high,” the 2015 letter read. “It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”

Philip Finnegan, director of corporate analysis at the Teal Group, said there has been “no appetite” in the U.S. military for removing the human decision-maker from the equation and allowing robots to target foes autonomously.

“The U.S. military has stressed it’s not interested,” he said.

Musk has long been wary of the proliferation of artificial intelligence, warning of its potential dangers as far back as 2014 when he drew a comparison between the future of AI and the film “The Terminator.” Musk is also a sponsor of OpenAI, a nonprofit he co-founded with entrepreneurs such as Peter Thiel and Reid Hoffman to research and build “safe” artificial intelligence, whose benefits are “as widely and evenly distributed as possible.”

Earlier this year, Musk unveiled details about his new venture Neuralink, a California company that plans to develop a device that can be implanted into the brain and help people who have certain brain injuries, such as strokes. The device would enable a person’s brain to connect wirelessly with the cloud, as well as with computers and with other brains that have the implant.

The end goal of the device, Musk said, is to fight potentially dangerous applications of artificial intelligence.

“We’re going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something — or eventually figuring out some way to be symbiotic and merge with AI,” Musk said in a story on the website Wait But Why.

Musk’s views of the risks of artificial intelligence have clashed with those of Facebook’s Mark Zuckerberg as well as others researching artificial intelligence. Last month, Zuckerberg called Musk’s warnings overblown and described himself as “optimistic.”

Musk shot back by saying Zuckerberg’s understanding of the subject was “limited.”

Times staff writer Samantha Masunaga contributed to this report.

tracey.lien@latimes.com

Twitter: @traceylien


UPDATES:

2:20 p.m.: This article was updated to include comment from an analyst.

Noon: This article was updated to include information about Neuralink.

This article was originally published at 9:45 a.m.
