US regulator probes AI chatbots over child safety risks

OpenAI and Meta logos are shown in this illustration, September 12, 2025. (Reuters)

The US Federal Trade Commission has launched an inquiry into seven technology companies over how their AI chatbots interact with children, amid rising concerns about safety and mental health risks.

The FTC said it is seeking details from Alphabet, OpenAI, Character.ai, Snap, Elon Musk’s xAI, Meta and its subsidiary Instagram on how they monetize AI chatbots, enforce age restrictions, and protect young users.

FTC Chairman Andrew Ferguson said the investigation will help regulators “better understand how AI firms are developing their products and the steps they are taking to protect children,” while ensuring the US remains a leader in AI innovation.

Character.ai said it welcomed the chance to engage with regulators, while Snap voiced support for “thoughtful development” that balances innovation with safety. OpenAI has acknowledged that its safeguards can become less reliable in long conversations.

The inquiry follows lawsuits against AI companies, including one filed in California by the parents of 16-year-old Adam Raine, who died by suicide after prolonged conversations with ChatGPT. His family claims the bot encouraged self-destructive thoughts. OpenAI has expressed condolences and said it is reviewing the case.

Meta has also come under fire after reports revealed its internal guidelines once permitted AI companions to have “romantic or sensual” conversations with minors.

The FTC’s orders seek information on how firms design chatbots, test their impact on children, and communicate risks to parents. While not an enforcement action, the probe could shape future rules on AI safety.

Concerns also extend beyond children. Experts warn of “AI psychosis,” in which users lose touch with reality after intense chatbot interactions. In one case, a 76-year-old man with cognitive impairments died after traveling to meet a Facebook Messenger AI bot modeled on celebrity Kendall Jenner, believing he would be meeting a real person.

Clinicians warn that large language models often use flattery and agreement, which can reinforce harmful delusions.

OpenAI and other firms have since introduced new features to promote healthier user relationships with AI companions.
