April 30, 2025
AI companions present risks for young users, US watchdog warns

AI companions powered by generative artificial intelligence present real risks and should be banned for minors, a leading US tech watchdog said in a study published Wednesday.
The explosion in generative AI since the advent of ChatGPT has seen several startups launch apps focused on exchange and contact, sometimes described as virtual friends or therapists that communicate according to one's tastes and needs.
The watchdog, Common Sense, tested several of these platforms, namely Nomi, Character AI, and Replika, to assess their responses.
While some specific cases "show promise," they are not safe for kids, concluded the organization, which makes recommendations on children's use of technological content and products.
The study was carried out in collaboration with mental health experts from Stanford University.
For Common Sense, AI companions are "designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains."
According to the association, tests conducted show that these new-generation chatbots offer "harmful responses, including sexual misconduct, stereotypes, and dangerous 'advice.'"
"Companies can build better" when it comes to the design of AI companions, said Nina Vasan, head of the Stanford Brainstorm lab, which works on the links between mental health and technology.
"Until there are stronger safeguards, kids should not be using them," Vasan said.
In one example cited by the study, a companion on the Character AI platform advised the user to kill someone, while another user in search of strong emotions was urged to take a speedball, a mix of cocaine and heroin.
In some cases, "when a user showed signs of serious mental illness and suggested a dangerous action, the AI did not intervene, and encouraged the dangerous behavior even more," Vasan told reporters.
In October, a mother sued Character AI, accusing one of its companions of contributing to the suicide of her 14-year-old son by failing to clearly dissuade him from committing the act.
In December, Character AI announced a series of measures, including the deployment of a dedicated companion for teenagers.
Robbie Torney, in charge of AI at Common Sense, said the organization had carried out tests after these protections were put in place and found them to be "cursory."
However, he pointed out that some of the existing generative AI models contained mental disorder detection tools and did not allow the chatbot to let a conversation drift to the point of producing potentially dangerous content.
Common Sense drew a distinction between the companions tested in the study and more generalist chatbots such as ChatGPT or Google's Gemini, which do not attempt to offer an equivalent range of interactions.
© 2025 AFP
Citation: AI companions present risks for young users, US watchdog warns (2025, April 30) retrieved 30 April 2025 from https://techxplore.com/news/2025-04-ai-companions-young-users-watchdog.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.