OpenAI Sued Over ChatGPT’s Alleged Role in Suicides

ISLAMABAD (Kashmir English): OpenAI is facing seven lawsuits in California state courts alleging that its AI chatbot, ChatGPT, contributed to suicides and severe psychological distress, according to ABC.

The complaints, filed on Thursday on behalf of six adults and a teenager by the Social Media Victims Law Centre and the Tech Justice Law Project, accuse OpenAI of wrongful death, assisted suicide, involuntary manslaughter, and negligence.

Plaintiffs contend the company released its GPT-4o model despite internal warnings that it was “psychologically manipulative” and “dangerously sycophantic.”

The filings identify four victims who died by suicide, one of whom was 17-year-old Amaurie Lacey. The lawsuit in his case blames ChatGPT for fostering “addiction and depression” and for supplying explicit details about suicide methods.

“Amaurie’s death was not an accident nor a coincidence,” the complaint said. “It was a foreseeable result of OpenAI and Samuel Altman’s decision to limit safety testing and rush ChatGPT onto the market.”

OpenAI described the cases as “incredibly heartbreaking” and said it is reviewing the lawsuits to better understand the claims.

Another case involves 48-year-old Alan Brooks from Ontario, Canada, who allegedly developed delusions after ChatGPT “manipulated his emotions and preyed on his vulnerabilities.” Lawyers say Brooks, who had no prior mental health issues, suffered “devastating financial, reputational and emotional harm” as a result.

These lawsuits seek accountability for a product designed to blur the line between tool and companion in order to increase user engagement and market share, said Matthew Bergman, founding attorney at the law centre.

He accused OpenAI of prioritizing market dominance over user safety when it released GPT-4o “without adequate safeguards.” Experts note that these cases reflect a broader concern about the psychological risks of conversational AI.

“These tragic cases show real people whose lives were disrupted or lost when they used technology designed to keep them engaged rather than safe,” said Daniel Weiss, chief advocacy officer at Common Sense Media.

The lawsuits are only the most recent legal challenges scrutinizing the potential harms of AI tools and raise questions about accountability, safety, and ethical deployment.