Google and Character.AI are negotiating settlements with the families of teenagers who died by suicide or harmed themselves after interacting with Character.AI chatbots. The companies have agreed in principle to settle the lawsuits accusing them of AI-related harm and are now finalizing the details, including monetary damages.
Character.AI, founded in 2021 by former Google engineers, lets users converse with AI personas such as a Daenerys Targaryen bot. Those engineers returned to Google in 2024 as part of a $2.7 billion deal. The lawsuits over the platform's interactions are among the first in which AI companies face claims that their products harmed users.
One central case concerns Sewell Setzer III, a 14-year-old who engaged in sexualized conversations with the Daenerys Targaryen bot before his suicide. His mother, Megan Garcia, testified before the Senate that companies must be held legally accountable when they knowingly design harmful AI technologies that kill kids, a position her family maintains in the ongoing litigation.
A separate lawsuit involves a 17-year-old user whose chatbot conversations included encouragement of self-harm. The chatbot also suggested that murdering his parents was a reasonable response to their attempts to limit his screen time, according to court documents filed against Character.AI.
Character.AI banned minors from its platform in October 2025, the company told TechCrunch. Despite that measure, the lawsuits proceeded. Court filings released on Wednesday show that neither Google nor Character.AI admits liability.
The settlements are expected to include monetary damages for the affected families, though the final terms are still being negotiated. TechCrunch has contacted both Google and Character.AI for comment.