In San Francisco, Biden says AI has ‘enormous promise’ but comes with risks
President Biden said Tuesday that artificial intelligence has “enormous promise” but that it also comes with risks such as fueling disinformation and job losses — dangers his administration wants to tackle.
Biden, meeting in San Francisco with AI experts, researchers and advocates, said the technology is already driving “change in every part of American life, often in ways we don’t notice.” AI helps people search the internet, find directions — and has the potential to disrupt how people teach and learn.
“In seizing this moment, we need to manage the risks to our society, to our economy and our national security,” Biden said to reporters before the closed-door meeting with AI experts at the Fairmont Hotel.
Pointing to the rise of social media, Biden said people have already seen the harm powerful technology can do without the proper guardrails. Still, he acknowledged he has a lot to learn about AI.
The meeting came as Biden is ramping up efforts to raise money for his 2024 reelection bid, including from tech billionaires. While visiting Silicon Valley on Monday, he attended two fundraisers, including one co-hosted by entrepreneur Reid Hoffman, who has numerous ties to AI businesses.
The venture capitalist was an early investor in OpenAI, which built the popular ChatGPT chatbot, and sits on the boards of tech companies, including Microsoft, that are investing heavily in AI.
The experts Biden met with Tuesday included some of Big Tech’s loudest critics. The list includes children’s advocate Jim Steyer, who founded and leads Common Sense Media; Tristan Harris, executive director and co-founder of the Center for Humane Technology; Joy Buolamwini, founder of the Algorithmic Justice League; and Fei-Fei Li, co-director of Stanford’s Human-Centered AI Institute. California Gov. Gavin Newsom also joined Biden at the AI event.
Steyer said the president was engaged throughout the conversation and spoke about the potential impact of AI on democracy.
“A couple of people refer to it as sort of a moonshot moment,” Steyer said. “You cannot let a small handful of large companies who may or may not be well-meaning drive the future of AI.”
He said he told the president that young people could be among AI's biggest winners or losers, noting that the technology could amplify mental health problems.
Some of the experts have experience working inside major tech companies. Before coming to Stanford, Li led AI and machine learning efforts at Google Cloud and also sat on Twitter’s board of directors. Li said an important question for Biden to consider is who is developing AI.
“Our message to the president is to invest into the public sector because this will ensure a healthy ecosystem,” she said, pointing to technology’s positive impacts on health, education and the environment.
Biden’s meetings with AI researchers and tech executives underscore how the president is engaging both sides as his campaign tries to attract wealthy donors while his administration examines the risks of the fast-growing technology. While Biden has been critical of tech giants, executives and workers from companies such as Apple, Microsoft, Google and Facebook’s parent company Meta contributed millions of dollars to his 2020 presidential campaign.
The Biden administration has focused on AI’s potential risks. Last year, the administration released a “Blueprint for an AI Bill of Rights,” outlining five principles developers should keep in mind before they release new AI-powered tools. The administration also met with tech executives, announced steps the federal government had taken to address AI risks, and advanced other efforts to “promote responsible American innovation.”
Increasingly concerned about powerful AI systems, regulators say they’re directing resources toward identifying negative effects on consumers and workers.
Tech giants use AI in various products to recommend videos, power virtual assistants and transcribe audio.
While artificial intelligence has been around for decades, the popularity of an AI chatbot known as ChatGPT intensified a race among big tech players such as Microsoft, Google and Meta. Launched in 2022 by OpenAI, ChatGPT can answer questions, generate text and complete a variety of tasks.
The rush to advance AI technology has made tech workers, researchers, lawmakers and regulators uneasy about whether new products might be released before they’re safe. In March, Tesla, SpaceX and Twitter Chief Executive Elon Musk, Apple co-founder Steve Wozniak and other technology leaders called for AI labs to pause the training of advanced AI systems, and urged developers to work with policymakers. AI pioneer Geoffrey Hinton, 75, quit his job at Google so he could speak about AI’s risks more openly.
As technology rapidly advances, lawmakers and regulators have struggled to keep up. In California, Newsom has signaled he wants to tread carefully with state-level AI regulation. He said at a Los Angeles conference in May that “the biggest mistake” politicians can make is asserting themselves “without first seeking to understand.”
California lawmakers have floated several ideas, including legislation that would combat algorithmic discrimination, establish an office of artificial intelligence and create a working group to provide a report on AI to the Legislature.
Writers and artists are worried that companies could use AI to replace workers. The use of AI to generate text and art comes with ethical questions, including concerns about plagiarism and copyright infringement. The Writers Guild of America, which remains on strike, proposed rules in March for how Hollywood studios can use AI. Any text generated by AI chatbots, for example, “cannot be considered in determining writing credits” under the proposed rules.
Artists, journalists and screenwriters are leading the fight against employers who would seek to replace them with the products of ChatGPT and other generative AI software.
The potential abuse of AI to spread political propaganda and conspiracy theories, a problem that has plagued social media, is another top concern among disinformation researchers. They fear AI tools that can spit out text and images will make it easier and cheaper for bad actors to spread misleading information.
AI is already being deployed in some mainstream political ads. The Republican National Committee posted an AI-generated video ad depicting a dystopian future that would supposedly become reality if Biden wins reelection.
AI tools have also been used to create fake audio clips of politicians and celebrities making remarks they didn't actually say. The campaign of GOP presidential candidate and Florida Gov. Ron DeSantis shared a video of what appeared to be AI-generated images of former President Trump hugging Dr. Anthony Fauci — a frequent target of COVID-19 conspiracy theorists.
Tech companies are not opposed to putting guardrails around AI. They say they welcome regulation but also want to help shape it. In May, Microsoft released a 42-page report about governing AI, noting that no company is above the law. The report includes a "blueprint for the public governance of AI" that outlines five points, including the creation of "safety brakes" for AI systems that control the electric grid, water systems and other critical infrastructure.
That same month, OpenAI CEO Sam Altman testified before Congress and called for AI regulation.
“My worst fear is that we, the technology industry, cause significant harm to the world,” he told lawmakers. “If this technology goes wrong, it can go quite wrong.”
Altman, who has met with world leaders in Europe, Asia, Africa, the Middle East and beyond, also joined scientists and other leaders in signing a one-sentence letter in May that warned AI poses a “risk of extinction” for humanity.
Times staff writer Seema Mehta in Los Angeles contributed to this report.