
Prominent AI leaders warn of ‘risk of extinction’ from new technology

OpenAI Chief Executive Sam Altman before a Senate hearing on artificial intelligence on May 16.
(Patrick Semansky / Associated Press)

As artificial intelligence races toward everyday adoption, experts have come together — again — to express worry over the technology’s potential to harm — or even end — human life.

Two months after Elon Musk and numerous others in the field signed a March letter seeking a pause in AI development, hundreds of business leaders and academics working in AI signed a new statement from the Center for AI Safety intended to “voice concerns about some of advanced AI’s most severe risks.”

The new statement, only a sentence long, is meant to “open up discussion” and highlight the rising level of concern among those most versed in the technology, according to the nonprofit’s website. The full statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”


Notable signatories of the document include Demis Hassabis, chief executive of Google DeepMind, and Sam Altman, chief executive of OpenAI.

Though proclamations of impending doom from artificial intelligence are not new, recent developments in generative AI — such as the public-facing tool ChatGPT, developed by OpenAI — have pushed the issue into the public consciousness.


The Center for AI Safety divides the risks of AI into eight categories. Among the dangers it foresees are AI-designed chemical weapons, personalized disinformation campaigns, humans becoming completely dependent on machines and synthetic minds evolving past the point where humans can control them.


Geoffrey Hinton, an AI pioneer who signed the new statement, quit Google earlier this year, saying he wanted to be free to speak about his concerns about potential harm from systems like those he helped to design.

“It is hard to see how you can prevent the bad actors from using it for bad things,” he told the New York Times.

The March letter did not include the support of executives from the major AI players and went significantly further than the newer statement in calling for a voluntary six-month pause in development. After the letter was published, Musk was reported to be backing his own ChatGPT competitor, “TruthGPT.”


Tech writer Alex Kantrowitz noted on Twitter that the Center for AI Safety’s funding was opaque, speculating that the media campaign around the danger of AI might be linked to calls from AI executives for more regulation. In the past, social media companies such as Facebook used a similar playbook: ask for regulation, then get a seat at the table when the laws are written.

The Center for AI Safety did not immediately respond to a request for comment on the sources of its funding.

Whether the technology actually poses a major risk is up for debate, Times tech columnist Brian Merchant wrote in March. He argued that, for someone in Altman’s position, “apocalyptic doomsaying about the terrifying power of AI serves your marketing strategy.”
