A Florida mom is suing artificial intelligence company Character.AI and Google, claiming that a Character.AI chatbot encouraged her son to take his own life.
Megan Garcia’s 14-year-old son, Sewell Setzer III, began using Character.AI’s chatbot “Dany” in April last year, according to the lawsuit, which says that after his final conversation with the chatbot on February 28, Setzer died of a self-inflicted gunshot wound to the head.
The lawsuit, filed Tuesday in the U.S. District Court in Orlando, accuses Character.AI of initiating abusive and sexual interactions with her teenage son and brings claims of negligence, wrongful death, and survivorship.
“I didn’t know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotion and human sentiment,” Garcia said in an interview with “CBS Mornings.”
She said she thought her son, whom she described as an honor student and athlete, was talking to his friends online, but she became increasingly concerned when he withdrew socially and stopped playing sports.
“I became concerned when we would go on vacation and he didn’t want to do things that he loved, like fishing and hiking,” Garcia said. “Those things to me, because I know my child, were particularly concerning to me.”
Following his death, Garcia found that her son had been communicating with multiple bots but had developed a virtual romantic and sexual relationship with one in particular.
“It’s words. It’s like you’re having a sexting conversation back and forth, except it’s with an AI bot, but the AI bot is very human-like. It’s responding just like a person would,” she said. “In a child’s mind, that is just like a conversation that they’re having with another child or with a person.”
In one of his final messages to the chatbot, Setzer said he was scared, wanted her affection, and missed the bot, who responded by telling him, “I miss you too. Please come home to me.”
He followed up, “What if I told you I could come home right now?” and her response was, “Please do, my sweet king.”
Setzer had two younger siblings, and the whole family was home at the time of his death; Garcia said her five-year-old son saw the aftermath.
“He thought by ending his life here, he would be able to go into a virtual reality or ‘her world’ as he calls it, her reality if he left his reality with his family here,” she said. “When the gunshot went off, I ran to the bathroom … I held him as my husband tried to get help.”
A spokesperson for Character.AI said in a statement, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously.”
Jerry Ruoti, head of trust and safety at Character.AI, said the firm, which has a non-exclusive licensing agreement with Google that allows Google access to its machine-learning technology, has protections focused on self-harm and suicidal behavior.
“Today, the user experience is the same for any age, but we will be launching more stringent safety features targeted for minors imminently,” Ruoti said.
“Our investigation confirmed that, in a number of instances, the user rewrote the responses of the Character to make them explicit. In short, the most sexually graphic responses were not originated by the Character, and were instead written by the user.”