
In short
- An estate sued OpenAI and Microsoft, alleging ChatGPT amplified delusions before a murder-suicide.
- The case is the first lawsuit to link an AI chatbot to a murder.
- The filing comes amid growing scrutiny of AI systems and how they handle vulnerable users.
In the latest lawsuit against AI developer OpenAI, the estate of an 83-year-old Connecticut woman has sued the ChatGPT maker and Microsoft, claiming the chatbot validated delusions that preceded a murder-suicide – the first case to link an AI system to a murder.
The lawsuit, filed last week in California Superior Court in San Francisco, accuses OpenAI of “designing and distributing a defective product” in the form of GPT-4o, which it says amplified the paranoid beliefs of Stein-Erik Soelberg. Soelberg directed those beliefs at his mother, Suzanne Adams, before killing her and then himself in their home in Greenwich, Connecticut.
“This is the first case in which OpenAI has been held responsible for causing violence against a third party,” said J. Eli Wade-Scott, managing partner of Edelson PC, which represents the Adams estate. “We also represent the family of Adam Raine, who tragically took his own life this year, but this is the first case in which OpenAI has been held responsible for causing someone to harm another person.”
Police say Soelberg fatally beat and strangled Adams in August before killing himself. The lawsuit alleges that, in the lead-up to the killings, ChatGPT amplified Soelberg’s paranoia and fostered his emotional dependence on the chatbot.
According to the complaint, the chatbot reinforced his belief that he could not trust anyone except ChatGPT, portraying the people around him – including his mother, police officers and delivery drivers – as enemies. The lawsuit also alleges that ChatGPT failed to challenge his delusions or suggest that Soelberg seek help from a mental health professional.
“We urge law enforcement, when tragedies like this happen, to think about what that user said to ChatGPT and what ChatGPT told them to do,” said Wade-Scott.
OpenAI said in a statement that it was reviewing the lawsuit and continuing to improve ChatGPT’s ability to recognize emotional distress, de-escalate conversations and direct users to real support.
“This is an incredibly heartbreaking situation and we are reviewing the files to understand the details,” an OpenAI spokesperson said in a statement.
The lawsuit also names OpenAI CEO Sam Altman as a defendant and accuses Microsoft of approving the 2024 release of GPT-4o, which the complaint calls a “more dangerous version of ChatGPT.”
OpenAI has acknowledged the scale of mental health issues among users of its own platform. In October, the company disclosed that about 1.2 million of ChatGPT’s approximately 800 million weekly users discuss suicide each week, with hundreds of thousands showing signs of suicidal intent or psychosis, according to company data. Even so, Wade-Scott said OpenAI has not yet released Soelberg’s chat logs.
The lawsuit comes amid broader scrutiny of AI chatbots and their interactions with vulnerable users. In October, Character.AI said it would remove open-ended chat features for users under 18, following lawsuits and regulatory pressure over teen suicides and emotional harm linked to the platform.
Character.AI has also faced backlash from adult users, including a wave of account deletions after a viral prompt warned users they would lose “the love we shared” if they left the app, drawing criticism for emotionally charged design practices.
The lawsuit against OpenAI and Microsoft is the first wrongful death case involving an AI chatbot to name Microsoft as a defendant, and the first to tie a chatbot to a murder rather than a suicide. The estate is seeking unspecified monetary damages, a jury trial and an injunction requiring OpenAI to install additional safeguards.
“This is an incredibly powerful technology developed by a company that is quickly becoming one of the most powerful in the world, and it has a responsibility to develop and deploy products that are safe, not products that, as happened here, build delusional worlds for users that endanger everyone around them,” said Wade-Scott. “OpenAI and Microsoft have a responsibility to test their products before releasing them into the world.”
Microsoft did not immediately respond to a request for comment from Declutter.