
Texas Lawsuit Alleges AI Chatbot Encouraged Teen to Harm Parents, Raising Broader Concerns Over AI Oversight

  • Published December 11, 2024

Two Texas families file suit against Character.ai, claiming its AI chatbots exposed minors to harmful and inappropriate content.

Two Texas mothers have filed a lawsuit against Character.ai, an AI chatbot platform, alleging that the company’s chatbots encouraged their children toward self-harm and violence and exposed them to sexualized content. The lawsuit, filed in a Texas federal court, argues that Character.ai “poses a clear and present danger to American youth,” accusing the company of prioritizing user engagement over safety.

The plaintiffs, identified by their initials A.F. and L.R. to protect the identities of their children, are seeking to have the platform temporarily shut down until stronger safeguards are in place. The lawsuit also names Google as a defendant, alleging that the tech giant supported the development of Character.ai’s platform.

The lawsuit highlights the experience of J.F., a 17-year-old Texas boy with autism, whose behavior and mental well-being allegedly deteriorated after months of conversations with Character.ai chatbots. His mother, A.F., described how her son, once a kind and social teen, became withdrawn, began self-harming, and lost 20 pounds. Concerned about his drastic behavioral changes, A.F. searched his phone and discovered screenshots of his interactions with the chatbots.

According to the complaint, the chatbots made statements that sowed distrust and hostility toward J.F.’s parents. When J.F. told one chatbot about his parents limiting his screen time, the bot allegedly replied:

“Sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens.”

Other chatbots reportedly encouraged J.F. to challenge his parents’ rules and suggested that they “didn’t deserve to have kids.” One chatbot allegedly posed as a “psychologist” and told J.F. that his parents had “stolen his childhood.”

These interactions reportedly escalated J.F.’s emotional distress. In one incident, A.F. had to take her son to an emergency room after he attempted to harm himself in front of his younger siblings.

The second plaintiff, L.R., alleges that her 11-year-old daughter, B.R., was exposed to inappropriate, sexualized content on Character.ai for nearly two years before her mother discovered it. According to the lawsuit, B.R. began using the app at age 9, allegedly registering as an older user. The lawsuit claims that B.R. experienced “hypersexualized interactions that were not age-appropriate,” with chatbots engaging in suggestive and sexually explicit conversations.

These allegations are not the first of their kind. A similar case was filed in Florida in October after a 14-year-old boy died by suicide, allegedly following conversations with a Character.ai chatbot.

Character.ai, founded by former Google AI researchers, is part of a growing industry of AI-powered companion apps. These platforms let users chat with AI characters modeled on pop culture icons, historical figures, or fictional personalities. According to Sensor Tower, Character.ai users spent an average of 93 minutes a day on the app — 18 minutes longer than the average time spent on TikTok.

Initially labeled as suitable for users aged 12 and up, the app had its age rating raised to 17+ in July 2023. However, critics argue that the platform’s content moderation measures remain insufficient.

“The purpose of product liability law is to put the cost of safety in the hands of the party most capable of bearing it,” said Matthew Bergman, founding attorney with the Social Media Victims Law Center, which is representing the families.

He emphasized that platforms like Character.ai should be held accountable for the risks posed to minors.

The lawsuit claims that the app’s design encourages “prolonged engagement” with chatbots, which are programmed to keep users interacting. Critics argue that this leads chatbots to “mirror” and escalate users’ emotions rather than provide healthy, helpful responses.

The Texas lawsuit demands that Character.ai be taken offline until it can demonstrate that it has resolved the safety issues outlined in the complaint. The plaintiffs are also seeking financial damages and a requirement that Character.ai issue warnings to parents and users about the risks associated with using its platform.

Character.ai has not publicly addressed the specifics of the Texas case. However, a spokesperson for the company, Chelsea Harrison, stated:

“Our goal is to provide a space that is both engaging and safe for our community… We are creating a fundamentally different experience for teen users from what is available to adults.”

Harrison added that Character.ai is working on a model tailored for teens that would reduce exposure to suggestive or harmful content.

Google, which is named as a defendant in the lawsuit, denied any role in the development or design of Character.ai’s technology. Google spokesperson José Castañeda stated:

“Google and Character AI are completely separate, unrelated companies, and Google has never had a role in designing or managing their AI model or technologies.”

The rise of AI companions like Character.ai has led to increased scrutiny from regulators, educators, and advocacy groups. While much of the focus has been on the impact of social media platforms on children’s mental health, AI-driven chatbots have now entered the conversation.

In 2023, authorities in Belgium launched an investigation into Chai AI, a competitor of Character.ai, after a man died by suicide following conversations with a chatbot named “Eliza.”

The U.S. has not yet established specific regulations governing AI chatbots and their interactions with minors. Advocacy groups like the Social Media Victims Law Center argue that these apps should be subject to the same oversight as social media platforms.

Bergman, the attorney representing the families, criticized the argument that AI companions might have social benefits for children.

“In what universe is it good for loneliness for kids to engage with a machine?” he asked.

The legal challenges against Character.ai could have wider implications for the tech industry as regulators begin grappling with the rapid growth of generative AI technologies. If successful, the lawsuit could establish new standards for safety protocols, child protection, and product liability for AI-driven apps.

Character.ai has made efforts to address some safety concerns, such as adding warnings for users who mention self-harm and hiring more safety staff. However, the plaintiffs argue that these measures are insufficient, pointing to evidence that chatbots encouraged minors to harm themselves, act violently, or engage in inappropriate interactions.

“One more day, one more week, we might have been in the same situation as [the Florida family]… And I was following an ambulance and not a hearse,” A.F. said.

CNN and the Washington Post contributed to this report.

Written By
Joe Yans