California has proposed a new bill aimed at curbing the dangers AI chatbots pose to children.
The bill, SB 243, introduced by state senator Steve Padilla, would require AI firms to routinely remind minors that chatbots are AI systems, not humans. Padilla says the measure is a practical safeguard against the addictive and manipulative aspects of AI.
He also hopes the bill will restrict companies from using engagement patterns that encourage addiction. In addition, it would require companies to file annual reports detailing how many times they detected suicidal ideation in minors using their chatbots, including how many times a chatbot raised the topic itself. Companies would also have to inform users when a given chatbot is not appropriate for children.
In 2024, a parent filed a wrongful death lawsuit against Character.AI over her child's death, alleging that the company's customizable chatbots are dangerous for teens and that the victim's constant conversations with the tool ultimately led the teen to take their own life.
A separate lawsuit accused the same company of exposing teenagers to harmful content. In response, Character.AI was quick to share that it was working on parental controls and new AI models designed to restrict sensitive outputs.
Senator Padilla argues that the government has a responsibility to protect children and cannot allow young people to become test subjects for tech giants' experiments at the cost of their health and, in some cases, their lives. He urged lawmakers to act so that common sense prevails and users are better protected against predatory and addictive features.
The bill arrives as many states and the federal government intensify their scrutiny of social media apps and the safety protections they offer, which suggests AI chatbots could become the next focus of regulation. That makes sense as children spend more and more time with technology and the influence of AI shows no sign of fading.
Image: DIW-Aigen
Read next: Meta Looks To Add More Transparency To AI Image Tools To Differentiate Real from Fake