HARRISBURG, Pa. — A local legislator is co-sponsoring a state bill to protect children and other vulnerable people from artificial intelligence chatbots that could exacerbate mental health crises.
State Sen. Nick Miller, D-Lehigh/Northampton, co-sponsored Senate Bill 1090, the Safeguarding Adolescents from Exploitative Chatbots and Harmful AI Technology, or SAFECHAT Act, with state Sen. Tracy Pennycuick, R-Berks/Montgomery, the prime sponsor.
The bill was approved Tuesday by the state Senate Communications and Technology Committee, which Pennycuick chairs.
"We must put safeguards in place to protect [children] from potential harm."State Sen. Nick Miller, D-Lehigh/Northampton
The bill next heads to the full Senate for consideration.
“Artificial intelligence is rapidly becoming part of everyday life," Miller said in a statement.
"But as these tools become more accessible to students and young children, we must put safeguards in place to protect them from potential harm.”
As noted in the proposed legislation, AI systems can "simulate a sustained human or human-like relationship with a user."
Those systems retain information from previous sessions with a user and personalize responses to keep a conversation going.
The proposed SAFECHAT Act is a response to stories of young people, including teens, who turned to AI chatbots for companionship and instead were encouraged to take their own lives, according to a news release from Miller's office.
In recent years, multiple families across the country have filed lawsuits against AI companies, alleging chatbot responses led to a family member’s death by suicide.
If passed, the bipartisan SAFECHAT Act “has the potential to save lives,” Miller, a former Allentown school director, said.
What's in the bill?
The proposed legislation would require AI technology to fulfill certain disclosures and safeguards.
For instance, chatbots must be prevented from “producing suicidal ideation, suicide or self-harm content.” They also must be prevented from encouraging a user to act violently toward others.
Additionally, if a user discusses suicide, suicidal thoughts or self-harm, the chatbot would be required to provide a notification with information on how to contact a crisis hotline or text line.
“As these tools become more common in classrooms, on smartphones and across social platforms, our laws must keep pace to prevent avoidable tragedies,” Pennycuick said in a statement.
The proposed legislation would establish additional safeguards specifically with adolescents and children in mind.
When a chatbot knows or suspects a user is a minor, it would be required to notify the user that it is “not an actual human being” and remind the minor of that fact every three hours.
The chatbot also would have to encourage the minor to take breaks from using the chatbot.
Additionally, the SAFECHAT Act would require AI companies to take measures to prevent chatbots from producing sexually explicit material or telling a minor to engage in such conduct.
The attorney general would be responsible for enforcing the SAFECHAT Act if it becomes law. Civil penalties for violating the act could be up to $10,000 per violation.
If you are having thoughts of suicide or facing other mental health struggles and need someone to talk to, call or text 988 to speak with a counselor through the 988 Suicide and Crisis Lifeline.