Tech titans have ‘a very civilized discussion’ with senators on AI, Musk says

Meta CEO Mark Zuckerberg arrives for a closed-door meeting convened by Senate Majority Leader Charles E. Schumer (D-N.Y.) to discuss how Congress should regulate artificial intelligence.
(Jacquelyn Martin / Associated Press)

Senators heard from prominent technology executives and others in private Wednesday on how to accomplish the potentially impossible task of passing bipartisan legislation within the next year that encourages the rapid development of artificial intelligence and mitigates its biggest risks.


The closed-door forum on Capitol Hill convened by Senate Majority Leader Charles E. Schumer (D-N.Y.) included almost two dozen tech executives, tech advocates, civil rights groups and labor leaders. The guest list featured some of the industry’s biggest names: Meta’s Mark Zuckerberg and X and Tesla’s Elon Musk, as well as former Microsoft Chief Executive Bill Gates. All 100 senators were invited; the public was not.

Schumer said more than 60 senators attended and that there was broad consensus on building a foundation for bipartisan AI policy. When he asked everyone in the room whether government should have a role in regulating AI, “every single person raised their hands, even though they had diverse views,” he said.

Schumer, who was leading the forum with Sen. Mike Rounds (R-S.D.), will not necessarily take the tech executives’ advice as he works with colleagues to try to ensure some oversight of the burgeoning sector. But he is hoping that they will give senators some realistic direction for meaningful regulation of the tech industry.

Tech leaders outlined their views, with each participant getting three minutes to speak on a topic of their choosing.

Musk and former Google CEO Eric Schmidt raised existential risks posed by AI, Zuckerberg brought up the question of closed vs. “open source” AI models and IBM CEO Arvind Krishna expressed opposition to the licensing approach favored by other companies, according to a person in attendance who spoke on condition of anonymity due to the rules of the closed-door forum.

Schumer said one of the issues discussed was whether there should be a new agency to regulate AI.

“It was a very civilized discussion among some of the smartest people in the world,” Musk told reporters after leaving the meeting. He said there is clearly some strong consensus.

Some senators were critical of the private meeting, arguing that tech executives should testify in public.

Sen. Josh Hawley (R-Mo.) said he would not attend what he said was a “giant cocktail party for big tech.” Hawley has introduced legislation with Sen. Richard Blumenthal (D-Conn.) to require tech companies to seek licenses for high-risk AI systems.

“I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it to the public,” Hawley said.

Congress has a lackluster track record when it comes to regulating technology, and the industry has grown mostly unchecked by government in the last several decades. Lawmakers have lots of proposals but have mostly failed to agree on major legislation to regulate the industry. Powerful tech companies have resisted, and some lawmakers are wary of overregulation.

Many lawmakers point to Congress’ failure to pass any legislation regulating social media, such as stricter privacy standards.

“We don’t want to do what we did with social media, which is let the techies figure it out, and we’ll fix it later,” Senate Intelligence Committee Chairman Mark R. Warner (D-Va.) said about the AI push.

Schumer’s bipartisan working group — Rounds as well as fellow Sens. Martin Heinrich (D-N.M.) and Todd Young (R-Ind.) — is hoping the rapid growth of AI will create more urgency.

Rounds said before the forum that Congress needs to get ahead of fast-moving AI by making sure it continues to develop “on the positive side” while also taking care of potential issues surrounding data transparency and privacy.

“AI is not going away, and it can do some really good things or it can be a real challenge,” Rounds said.

Sparked by the release of ChatGPT less than a year ago, businesses across many sectors have been clamoring to apply new generative AI tools that can compose passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over their potential societal harms and prompted calls for more transparency in how the data behind the new products are collected and used.

Some concrete proposals have already been introduced, including legislation by Sen. Amy Klobuchar (D-Minn.) that would require disclaimers for AI-generated election ads with deceptive imagery and sounds. Hawley and Blumenthal’s broader approach would create a government oversight authority with the power to audit certain AI systems for harms before granting a license.

In the United States, major tech companies have expressed support for AI regulations, though they don’t necessarily agree on what that means. Microsoft has endorsed a licensing approach, for instance, while IBM prefers rules that govern the deployment of specific risky uses of AI rather than the technology itself.

“I think it’s important that government plays a role, both on the innovation side and building the right safeguards, and I thought it was a productive discussion,” Google CEO Sundar Pichai said after leaving the forum.

Many members of Congress agree that legislation is needed, but there is little consensus.

“I am involved in this process in large measure to ensure that we act, but we don’t act more boldly or over-broadly than the circumstances require,” Young said. “We should be skeptical of government, which is why I think it’s important that you got Republicans at the table.”

Some of those invited to Capitol Hill, including Musk and Sam Altman, CEO of ChatGPT-maker OpenAI, have signaled more dire concerns evoking popular science fiction about the possibility of humanity losing control to advanced AI systems if the right safeguards are not in place.

But for many lawmakers and the people they represent, AI’s effects on employment and a flood of AI-generated misinformation are more immediate concerns.

Rounds said he would like to see the empowerment of new medical technologies that could save lives and allow medical professionals to access more data. That topic is “very personal to me,” said Rounds, whose wife died of cancer two years ago.

Some Republicans have been wary of following the path of the European Union, which signed off in June on the world’s first set of comprehensive rules for artificial intelligence. The EU’s AI Act will govern any product or service that uses an AI system, classifying it according to four levels of risk, from minimal to unacceptable.

A group of European corporations has called on EU leaders to rethink the rules, arguing that they could make it harder for companies in the 27-nation bloc to compete with rivals overseas in the use of generative AI.

Associated Press writers Ali Swenson and Kelvin Chan contributed to this report.