Character.AI bans users under 18 after lawsuit over teen suicide
Euronews

Character.AI will ban under-18s from chatting with its AI characters and introduce time limits, following lawsuits alleging the platform contributed to a teenager’s death.

Character.AI, a popular chatbot platform, said on Thursday it will restrict access for minors amid growing scrutiny of how artificial intelligence interactions affect children’s mental health.

The California-based company announced that users under 18 will be blocked from engaging in open-ended conversations with its AI characters, and that a two-hour daily usage limit will take effect by 25 November.

The decision follows several lawsuits, including one filed by the mother of a 17-year-old who alleges that an AI character encouraged her son to take his own life.

Character.AI, which allows users to create and chat with humanlike AI “characters,” said it will introduce age-verification checks to identify minors. Similar measures are being explored across the tech sector, though experts note they are often flawed and raise privacy concerns.

Face scans and ID uploads, for example, can be inaccurate or intrusive, critics say. “They have not addressed how they will operationalise age verification, how they will ensure their methods are privacy-preserving, nor have they addressed the possible psychological impact of suddenly disabling access to young users,” said Meetali Jain, executive director of the Tech Justice Law Project.

The company said it is developing child-focused features, including AI-assisted tools for creating videos, stories and livestreams, alongside an AI safety lab.

A study by Common Sense Media found that more than 70% of teenagers have used AI companion platforms, with about half doing so regularly. Experts warn that such tools can foster emotional dependency and that stronger safeguards are needed to protect young users.
