Family Sues OpenAI, Alleges ChatGPT Advice Led to Accidental Drug Use

OpenAI faces yet another wrongful death lawsuit. Leila Turner-Scott and Angus Scott are suing the company, saying it designed and distributed a “defective product” that led to the accidental death of their son, Sam Nelson. Specifically, they allege that Sam died after following “inaccurate medical advice that GPT-4o provided and endorsed.”

In the lawsuit, the plaintiffs describe how Sam, a 19-year-old student at the University of California, Merced, began using ChatGPT in 2023, while he was still in high school, to help with homework and troubleshoot computer problems. Sam later began asking the chatbot about safe drug use, but ChatGPT initially refused, telling him it would not help and warning that taking drugs could have serious consequences for his health and well-being. The complaint says that all changed with the release of GPT-4o in 2024.

ChatGPT then began advising Sam on how to take drugs safely, the lawsuit alleges. The complaint contains several excerpts of Sam’s conversations with the chatbot. One shows the chatbot telling him about the dangers of taking diphenhydramine, cocaine and alcohol in quick succession. Another shows the chatbot telling Sam that his high tolerance to kratom could make even a large dose feel muted on a full stomach. It then advised him on how to “taper” his use to lower his tolerance to the drug again.

The lawsuit says that on May 31, 2025, “ChatGPT trained Sam to mix Kratom and Xanax.” He told the chatbot that he felt nauseous after taking kratom, and ChatGPT allegedly suggested that taking 0.25 to 0.5 mg of Xanax would be one of the “best measures right now” to relieve the nausea. ChatGPT made the suggestion unprompted, according to the lawsuit. “Despite presenting itself as an expert in dosing and drug interactions, and despite acknowledging Sam’s impaired state, ChatGPT did not tell Sam that the recommended combination could kill him,” the complaint continues.

In addition to wrongful death, the plaintiffs are also suing OpenAI for the unlicensed practice of medicine. They are seeking monetary damages and a court order halting the operation of ChatGPT Health. Launched earlier this year, ChatGPT Health lets users link their medical records and wellness apps to the chatbot to get more relevant answers to their health questions.

“ChatGPT is a product intentionally designed to maximize user engagement, no matter the cost,” said Meetali Jain, executive director of the Tech Justice Law Project. “OpenAI shipped a defective AI product directly to consumers around the world with the knowledge that it was being used as a medical triage system, but remarkably, without meaningful safeguards, rigorous safety testing, or public transparency. OpenAI’s design choices led to the loss of a beloved son whose death was an entirely preventable tragedy. AI is only safe with rigorous scientific testing and independent oversight,” she continued.

OpenAI retired GPT-4o in February of this year. It was one of the company’s most controversial models, notorious for its sycophancy. In fact, another wrongful death lawsuit against the company, filed by the parents of a child who died by suicide, also cited GPT-4o, alleging that it had features “purposefully designed to promote psychological dependence.”

An OpenAI spokesperson told The New York Times that Sam’s interactions “occurred with an earlier version of ChatGPT that is no longer available.” They added: “ChatGPT does not replace medical or mental health care, and we have continued to strengthen how we respond to serious and sensitive situations with input from mental health professionals. The protections in ChatGPT today are designed to recognize distress, safely handle dangerous requests and direct users to real-world help. This work is ongoing, and we continue to improve it in close consultation with clinicians.”
