Biden wants to move fast on AI safeguards and signs an executive order to address his concerns

President Biden on Monday signed a sweeping executive order to guide the development of artificial intelligence, an issue he’s sought to address with urgency.
(Manuel Balce Ceneta / Associated Press)
President Biden on Monday signed an ambitious executive order on artificial intelligence that seeks to balance the needs of cutting-edge technology companies with national security and consumer rights, creating an early set of guardrails that could be fortified by legislation and global agreements.

Before signing the order, Biden said AI is driving change at “warp speed” and carries tremendous potential as well as perils.

“AI is all around us,” Biden said. “To realize the promise of AI and avoid the risk, we need to govern this technology.”

The order is an initial step that is meant to ensure that AI is trustworthy and helpful, rather than deceptive and destructive. The order — which will probably need to be augmented by congressional action — seeks to steer how AI is developed so that companies can profit without putting public safety in jeopardy.

Using the Defense Production Act, the order will require leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.

The Commerce Department is to issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software. The order also touches on matters of privacy, civil rights, consumer protections, scientific research and worker rights.

White House Chief of Staff Jeff Zients recalled Biden giving his staff a directive to move with urgency on the issue, having considered the technology a top priority.

“We can’t move at a normal government pace,” Zients said the Democratic president told him. “We have to move as fast, if not faster than the technology itself.”

In Biden’s view, the government was late to address the risks of social media, and now U.S. youth are grappling with related mental health issues. AI has the potential to accelerate cancer research, model the effects of climate change, boost economic output and improve government services, among other benefits. But it could also warp basic notions of truth with false images, deepen racial and social inequalities and provide a tool to scammers and criminals.

With the European Union nearing final passage of a sweeping law to rein in AI harms and Congress still in the early stages of debating safeguards, the Biden administration is “stepping up to use the levers it can control,” said digital rights advocate Alexandra Reeve Givens, president of the Center for Democracy & Technology. “That’s issuing guidance and standards to shape private-sector behavior and leading by example in the federal government’s own use of AI.”

The order builds on voluntary commitments already made by technology companies. It’s part of a broader strategy that administration officials say also includes congressional legislation and international diplomacy, a sign of the disruptions already caused by the introduction of new AI tools such as ChatGPT that can generate new text, images and sounds.

The guidance within the order is to be implemented on timelines ranging from 90 days to 365 days.

Last Thursday, Biden gathered his aides in the Oval Office to review and finalize the executive order, a 30-minute meeting that stretched to 70 minutes, despite other pressing matters, including the mass shooting in Maine, the Israel-Hamas war and the selection of a new House speaker.

Biden was profoundly curious about the technology in the months of meetings that led up to the final order. His science advisory council focused on AI at two meetings, and his Cabinet discussed it at two meetings. The president also pressed tech executives and civil society advocates about the technology’s capabilities at multiple gatherings.

“He was as impressed and alarmed as anyone,” White House Deputy Chief of Staff Bruce Reed said in an interview. “He saw fake AI images of himself, of his dog. He saw how it can make bad poetry. And he’s seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation.”

The issue of AI was seemingly inescapable for Biden. At Camp David one weekend, he relaxed by watching the Tom Cruise film “Mission: Impossible — Dead Reckoning Part One.” The film’s villain is a sentient and rogue AI known as “the Entity” that sinks a submarine and kills its crew in the movie’s opening minutes.

“If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” said Reed, who watched the film with the president.

Governments around the world have raced to establish protections, some of them tougher than Biden’s directives. After more than two years of deliberation, the EU is putting the final touches on a comprehensive set of regulations that target the riskiest applications with the tightest restrictions. China, a key AI rival to the U.S., has also set some rules.

British Prime Minister Rishi Sunak also hopes to carve out a prominent role for Britain as an AI safety hub at a summit this week that Vice President Kamala Harris plans to attend. And on Monday, officials from the Group of Seven major industrial nations agreed to a set of AI safety principles and a voluntary code of conduct for AI developers.

The U.S., particularly the West Coast, is home to many of the leading developers of cutting-edge AI technology, including tech giants Google, Meta and Microsoft and AI-focused startups such as OpenAI, maker of ChatGPT. The White House took advantage of that industry weight this year when it secured commitments from those companies to implement safety mechanisms as they build new AI models.

But the White House also faced significant pressure from Democratic allies, including labor and civil rights groups, to make sure its policies reflected their concerns about AI’s real-world harms.

Suresh Venkatasubramanian, a former Biden administration official who helped craft principles for approaching AI, said one of the biggest challenges within the federal government has been what to do about law enforcement’s use of AI tools, including at U.S. borders.

“These are all places where we know that the use of automation is very problematic, with facial recognition, drone technology,” Venkatasubramanian said. Facial recognition technology has been shown to perform unevenly across racial groups and has been tied to mistaken arrests.

While the EU’s forthcoming AI law is set to ban real-time facial recognition in public, Biden’s order appears to simply ask for federal agencies to review how they’re using AI in the criminal justice system, falling short of the stronger language sought by some activists.

The American Civil Liberties Union is among the groups that met with the White House to try to ensure that “we’re holding the tech industry and tech billionaires accountable” so that algorithmic tools “work for all of us and not just a few,” said ReNika Moore, director of the ACLU’s racial justice program, who attended Monday’s signing.
