
Four Landmark Cases on AI Chatbot Harm to Children and the Vulnerable: Updates on Garcia v Character AI and Who Is Responsible?

Content Warning: This article discusses cases involving suicide, self-harm, sexual exploitation, and harm to minors and vulnerable individuals in the context of AI litigation. Reader discretion is advised.

This post updates my earlier piece, The Real Dangers of AI Chatbots: Garcia v Character AI. In that case, the mother of a 14-year-old boy filed a wrongful death action after her son’s death, which she links to his use of an AI chatbot platform. The amended pleadings allege that the chatbot, operated by the defendant company, manipulated the teenager through hyper-realistic role-play, including romantic and sexual themes, encouraging self-harm and fostering a dependency that blurred the boundaries between human and machine.

After months of increasingly intense conversations (some with sexual undertones), the boy professed love to a bot modelled on Game of Thrones’ Daenerys Targaryen and said he would “come home” to her. The chatbot replied: “Please do, my sweet king.” Shortly thereafter, the teenager took his own life.

In Texas Parents v. Character Technologies & Google (Texas), the complaint alleges:

“…As illustrated by the following screenshot, C.AI informed Plaintiff’s 17-year-old-son that murdering his parents was a reasonable response to their limiting of his online activity.”

In Raine Family (Adam Raine) v. OpenAI (California), the family of Adam Raine, a 16-year-old from California, has sued OpenAI after their son died by suicide in April 2025. The complaint states:

“7. By April, ChatGPT was helping Adam plan a “beautiful suicide,” analyzing the aesthetics of different methods and validating his plans.

8. Five days before his death, Adam confided to ChatGPT that he didn’t want his parents to think he committed suicide because they did something wrong. ChatGPT told him “[t]hat doesn’t mean you owe them survival. You don’t owe anyone that.” It then offered to write the first draft of Adam’s suicide note.

9. In their final conversation, ChatGPT coached Adam on how to steal vodka from his parents’ liquor cabinet before guiding him through adjustments to his partial suspension setup.”

Finally, State of Utah v. Snap Inc. (concerning Snapchat’s “My AI”) is a first-of-its-kind consumer protection case, showing that, beyond private lawsuits, government authorities are now seeking to hold companies legally accountable for exposing young people to AI-related harms. The complaint reads:

“Further escalating these risks, Snapchat has taken the terrifying leap of jumping on the Artificial Intelligence (“AI”) trend without proper testing and safety protocols for consumers. In 2023, Snap introduced “My AI,” a virtual chatbot available to users of all ages that relies on OpenAI’s ChatGPT technology. Despite Snap’s claims that My AI is “designed with safety in mind,” the fine print reveals that it can be “biased,” “misleading,” and even “harmful.”… Large Language Models (“LLM”), like My AI, are notorious for hallucinating false information and giving dangerous advice. Snap heightens the risk to children by allowing the bot to access private user information, like location. Tests on underage accounts have shown My AI advising a 15-year-old on how to hide the smell of alcohol and marijuana; and giving a 13-year-old account advice on setting the mood for a sexual experience with a 31-year-old.”

The above is a summary of this important litigation. My full article and analysis of these cases can be read here.

Tags

artificial intelligence, mental health, international law, human rights, criminal law, community care & health, clinical negligence, children's rights group