September 12, 2025
US regulator probes AI chatbots over child safety concerns
Andrew Zinin
lead editor

The US Federal Trade Commission announced Thursday it has launched an inquiry into AI chatbots that act as digital companions, focusing on potential risks to children and teenagers.
The consumer protection agency issued orders to seven companies—including tech giants Alphabet, Meta, OpenAI and Snap—seeking information about how they monitor and address negative impacts from chatbots designed to simulate human relationships.
"Protecting kids online is a top priority for the FTC," said Chairman Andrew Ferguson, emphasizing the need to balance child safety with maintaining US leadership in artificial intelligence innovation.
The inquiry targets chatbots that use generative AI to mimic human communication and emotions, often presenting themselves as friends or confidants to users.
Regulators expressed particular concern that children and teens may be especially vulnerable to forming relationships with these AI systems.
The FTC is using its broad investigative powers to examine how companies monetize user engagement, develop chatbot personalities, and measure potential harm.
The agency also wants to know what steps firms are taking to limit children's access and comply with existing privacy laws protecting minors online.
Companies receiving orders include Character.AI, Elon Musk's xAI Corp, and others operating consumer-facing AI chatbots.
The investigation will examine how these platforms handle personal information from user conversations and enforce age restrictions.
The commission voted unanimously to launch the study, which does not have a specific law enforcement purpose but could inform future regulatory action.
The probe comes as AI chatbots have grown increasingly sophisticated and popular, raising questions about their psychological impact on vulnerable users, particularly young people.
Last month the parents of Adam Raine, a 16-year-old who died by suicide in April, filed a lawsuit against OpenAI, accusing ChatGPT of giving their son detailed instructions on how to take his own life.
Shortly after the lawsuit emerged, OpenAI announced it was working on corrective measures for its world-leading chatbot.
The San Francisco-based company said it had observed that during prolonged exchanges, ChatGPT does not always suggest contacting a mental health service when a user mentions suicidal thoughts.
© 2025 AFP
Citation: US regulator probes AI chatbots over child safety concerns (2025, September 12) retrieved 12 September 2025 from https://techxplore.com/news/2025-09-probes-ai-chatbots-child-safety.html