California Senate passes bill that aims to make AI chatbots safer
California lawmakers on Tuesday moved one step closer to placing more guardrails around artificial intelligence-powered chatbots.
The Senate passed a bill that aims to make chatbots used for companionship safer after parents raised concerns that virtual characters harmed their children’s mental health.
The legislation, which now heads to the California State Assembly, shows how state lawmakers are tackling safety concerns surrounding AI as tech companies release more AI-powered tools.
“The country is watching again for California to lead,” said Sen. Steve Padilla (D-Chula Vista), one of the lawmakers who introduced the bill, on the Senate floor.
At the same time, lawmakers are trying to balance concerns that they could be hindering innovation. Groups opposed to the bill, such as the Electronic Frontier Foundation, say the legislation is too broad and would run into free speech issues, according to a Senate floor analysis of the bill.
Under Senate Bill 243, operators of companion chatbot platforms would be required to remind users at least every three hours that the virtual characters aren’t human. They would also have to disclose that companion chatbots might not be suitable for some minors.
Platforms would also need to take other steps such as implementing a protocol for addressing suicidal ideation, suicide or self-harm expressed by users. That includes showing users suicide prevention resources.
Suicide prevention and crisis counseling resources
If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 988, the United States’ first nationwide three-digit mental health crisis hotline, which connects callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.
Operators of these platforms would also have to report the number of times a companion chatbot raised suicidal ideation or actions with a user, among other requirements.
Dr. Akilah Weber Pierson, one of the bill’s co-authors, said she supports innovation but that it must come with “ethical responsibility.” Chatbots, the senator said, are engineered to hold people’s attention, including children’s.
“When a child begins to prefer interacting with AI over real human relationships, that is very concerning,” said Sen. Weber Pierson (D-La Mesa).
The bill defines companion chatbots as AI systems capable of meeting the social needs of users. It excludes chatbots that businesses use for customer service.
The legislation garnered support from parents whose children died after they began chatting with chatbots. One of those parents is Megan Garcia, a Florida mom who sued Google and Character.AI after her son Sewell Setzer III died by suicide last year.
In the lawsuit, she alleges the platform’s chatbots harmed her son’s mental health and failed to notify her or offer help when he expressed suicidal thoughts to these virtual characters.
Character.AI, based in Menlo Park, Calif., is a platform where people can create and interact with digital characters that mimic real and fictional people. The company has said that it takes teen safety seriously and rolled out a feature that gives parents more information about the amount of time their children are spending with chatbots on the platform.
Character.AI asked a federal court to dismiss the lawsuit, but a federal judge in May allowed the case to proceed.