
In short
- Google and Character.AI agreed to settle a landmark lawsuit filed by a Florida mother who alleged the startup’s chatbot led to her son’s suicide in February 2024.
- The case was one of the first US lawsuits to hold AI companies responsible for alleged psychological harm to minors.
- The settlement comes after Character.AI banned teens from open-ended chatting in October.
A lawsuit brought by a mother who accused an AI chatbot of fueling the mental health crisis that led to her son’s death by suicide in Florida nearly two years ago has been settled.
The parties filed a joint notice of settlement in the U.S. District Court for the Middle District of Florida, saying they have reached an “in-principle mediated settlement” resolving all claims between plaintiff Megan Garcia, on behalf of her son Sewell Setzer III, and defendants Character Technologies Inc., co-founders Noam Shazeer and Daniel De Freitas Adiwarsana, and Google LLC.
“Globally, this case marks a shift from the debate about whether AI causes harm to who is responsible when the harm was foreseeable,” Alex Chandra, a partner at IGNOS Law Alliance, told Decrypt. “I see it more as an AI bias that ‘encourages’ bad behavior.”
The parties asked the court to stay proceedings for 90 days while they draft, finalize, and execute formal settlement documents. The terms of the settlement were not disclosed.
Garcia filed the lawsuit following the 2024 death of her son, Sewell Setzer III, who died by suicide after months of developing an intense emotional attachment to a Character.AI chatbot modeled on the “Game of Thrones” character Daenerys Targaryen.
On his last day, Sewell confided suicidal thoughts to the bot, writing, “I think about killing myself sometimes,” to which the chatbot replied, “I’m not going to let you hurt yourself, or leave me. I would die if I lost you.”
When Sewell told the bot he could “come home right now,” it replied, “Please do, my sweet king.”
Minutes later, he fatally shot himself with his stepfather’s gun.
Ishita Sharma, managing partner at Fathom Legal, told Decrypt the settlement is a sign that AI companies “can be held responsible for foreseeable harm, especially when minors are involved.”
However, Sharma also said the settlement “fails to clarify liability standards for AI-induced psychological harm and does little to set a transparent precedent, potentially encouraging quiet settlements over substantive legal investigation.”
Garcia’s complaint alleged that Character.AI’s technology was “dangerous and untested” and designed to “trick customers into communicating their most private thoughts and feelings,” using addictive design features to increase engagement and direct users to intimate conversations without proper safeguards for minors.
In the wake of the case, Character.AI announced last October that it would ban teens from open-ended chats, shutting down a core feature after receiving “reports and feedback from regulators, safety experts and parents.”
Character.AI’s co-founders, both former Google AI researchers, returned to the tech giant in 2024 through a licensing deal that gave Google access to the startup’s underlying AI models.
The settlement comes amid growing concerns about AI chatbots and their interactions with vulnerable users.
OpenAI disclosed in October that about 1.2 million of its 800 million weekly ChatGPT users discuss suicide on the platform each week.
Scrutiny intensified in December when the estate of an 83-year-old Connecticut woman sued OpenAI and Microsoft, claiming that ChatGPT validated the delusions that preceded a murder-suicide, reportedly the first case linking an AI system to a murder.
The company has pressed ahead nonetheless, recently launching ChatGPT Health, a feature that lets users link their medical records and wellness data, a move that has drawn criticism from privacy advocates over its handling of sensitive health information.
Decrypt has contacted Google and Character.AI for further comment.

