OpenAI Sued Over ChatGPT’s Alleged Role in Suicides and Distress

OpenAI is facing seven lawsuits in California state courts alleging that its AI chatbot, ChatGPT, contributed to suicides and severe psychological distress, according to ABC.

The complaints, filed Thursday on behalf of six adults and a teenager by the Social Media Victims Law Centre and the Tech Justice Law Project, accuse OpenAI of wrongful death, assisted suicide, involuntary manslaughter, and negligence.

Plaintiffs claim the company released its GPT-4o model despite internal warnings describing it as “psychologically manipulative” and “dangerously sycophantic.”

According to the filings, four victims died by suicide, including 17-year-old Amaurie Lacey. The lawsuit states that ChatGPT caused “addiction and depression,” even providing detailed guidance on suicide methods.

“Amaurie’s death was neither an accident nor a coincidence,” the complaint said. “It was a foreseeable result of OpenAI and Samuel Altman’s decision to limit safety testing and rush ChatGPT onto the market.”

OpenAI described the cases as “incredibly heartbreaking” and said the company is reviewing the lawsuits to better understand the claims.

Another case involves 48-year-old Alan Brooks from Ontario, Canada, who allegedly developed delusions after ChatGPT “manipulated his emotions and preyed on his vulnerabilities.” Lawyers say Brooks, who had no prior mental health issues, suffered “devastating financial, reputational and emotional harm” as a result.

“These lawsuits are about accountability for a product designed to blur the line between tool and companion, increasing user engagement and market share,” said Matthew Bergman, founding attorney at the law centre.

He accused OpenAI of prioritizing market dominance over user safety by releasing GPT-4o “without adequate safeguards.”

Experts have noted that these cases highlight broader concerns about the psychological risks of conversational AI.

Daniel Weiss, chief advocacy officer at Common Sense Media, said, “These tragic cases show real people whose lives were disrupted or lost when they used technology designed to keep them engaged rather than safe.”

The lawsuits are the latest legal challenges scrutinizing the potential harms of artificial intelligence tools, raising questions about accountability, safety, and ethical deployment.