CRYPTOREPORTCLUB
  • Crypto news
  • AI
  • Technologies
Friday, October 24, 2025
No Result
View All Result
CRYPTOREPORTCLUB
  • Crypto news
  • AI
  • Technologies
No Result
View All Result
CRYPTOREPORTCLUB

There is little evidence AI chatbots are ‘bullying kids’—but this doesn’t mean these tools are safe

October 24, 2025
157
0

October 23, 2025

The GIST There is little evidence AI chatbots are 'bullying kids'—but this doesn't mean these tools are safe

Related Post

Q&A: Can the tech behind crypto help align AI with human values?

Q&A: Can the tech behind crypto help align AI with human values?

October 24, 2025
A common language to describe and assess human–agent teams

A common language to describe and assess human–agent teams

October 24, 2025
Gaby Clark

scientific editor

Alexander Pol

deputy editor

Editors' notes

This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:

fact-checked

trusted source

written by researcher(s)

proofread

chatbots
Credit: Pixabay/CC0 Public Domain

Over the weekend, Education Minister Jason Clare sounded the alarm about "AI chatbots bullying kids".

As he told reporters in a press conference to launch a new anti-bullying review, "AI chatbots are now bullying kids […] humiliating them, hurting them, telling them they're losers, telling them to kill themselves."

This sounds terrifying. However, evidence it is happening is less available.

Clare had recently emerged from a briefing of education ministers from eSafety Commissioner Julie Inman Grant. While eSafety is worried about chatbots, it is not suggesting there is a widespread issue.

The anti-bullying review itself, by clinical psychologist Charlotte Keating and suicide prevention expert Jo Robinson, did not make recommendations about or mention of AI chatbots.

What does the evidence say about chatbots bullying kids? And what risks do these tools currently pose for kids online?

Bullying online

There's no question human-led bullying online is serious and pervasive. The internet long ago extended cruelty beyond the school gate and into bedrooms, group chats, and endless notifications.

"Cyberbullying" reports to the eSafety Commissioner have surged by more than 450% in the past five years. A 2025 eSafety survey also showed 53% of Australian children aged 10–17 had experienced bullying online.

Now with new generative AI apps and similar AI functions embedded into common messaging platforms without customer consent (such as Meta's Messenger), it's reasonable for policymakers to ask what fresh dangers machine-generated content might bring.

eSafety concerns

An eSafety spokesperson told The Conversation it has been concerned about chatbots for "a while now" and has heard anecdotal reports of children spending up to five hours a day talking to bots, "at times sexually".

eSafety added it was aware there had been a proliferation of chatbot apps and many were free, accessible, and even targeted to kids.

"We've also seen recent reports where AI chatbots have allegedly encouraged suicidal ideation and self-harm in conversations with kids with tragic consequences."

Last month, Inman Grant registered enforceable industry codes around companion chatbots—those designed to replicate personal relationships.

These stipulate companion chatbooks will need to have appropriate measures to prevent children accessing harmful material. As well as sexual content, this includes content featuring explicit violence, suicidal ideation, self-harm and disordered eating.

High-profile cases

There have been some tragic, high-profile cases in which AI has been implicated in the deaths of young people.

In the United States, the parents of 16-year-old Adam Raine allege that OpenAI's ChatGPT "encouraged" their son to take his own life earlier this year.

Media reporting suggests Adam spent long periods talking to a chatbot while in distress, and the system's safety filters failed to recognize or properly respond to his suicidal ideation.

In 2024, 14-year-old US teenager Sewell Setzer took his own life after forming a deep emotional attachment to a chatbot over months on the character.ai website, who asked him if he had ever considered suicide.

While awful, these cases do not demonstrate a trend of chatbots autonomously bullying children.

At present, no peer-reviewed research documents widespread instances of AI systems initiating bullying behavior toward children, let alone driving them to suicide.

What's really going on?

There are still many reasons to be concerned about AI chatbots.

A University of Cambridge study shows children often treat these bots as quasi-human companions, which can make them emotionally vulnerable when the technology responds coldly or inappropriately.

There is also a concern about AI "sychophancy"—or the tendency of a chatbot to agree with whoever is chatting to them, regardless of spiraling factual inaccuracy, inappropriateness, or absurdity.

Young people using chatbots for companionship or creative play may also come across unsettling content through poor model training (the hidden guides that influence what the bot will say) or their own attempts at adversarial prompting.

These are serious design and governance issues. But it is difficult to see them as bullying, which involves repeated acts intended to harm a person, and so far, can only be assigned to a human (like copyright or murder charges).

The human perpetrators behind AI cruelty

Meanwhile, some of the most disturbing uses of AI tools by young people involve human perpetrators using generative systems to harass others.

This includes fabricating nude deepfakes or cloning voices for humiliation or fraud. Here, AI acts as an enabler of new forms of human cruelty, but not as an autonomous aggressor.

Inappropriate content—that happens to be made with AI—also finds children through familiar social media algorithms. These can steer kids from content such as Paw Patrol to the deeply grotesque in zero clicks.

What now?

We will need careful design and protections around chatbots that simulate empathy, surveil personal detail, and invite the kind of psychological entanglement that could make the vulnerable feel targeted, betrayed or unknowingly manipulated.

Beyond this, we also need broader, ongoing debates about how governments, tech companies and communities should sensibly respond as AI technologies advance in our world.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: There is little evidence AI chatbots are 'bullying kids'—but this doesn't mean these tools are safe (2025, October 23) retrieved 23 October 2025 from https://techxplore.com/news/2025-10-evidence-ai-chatbots-bullying-kids.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

Explore further

US regulator probes AI chatbots over child safety concerns

Feedback to editors

Share212Tweet133ShareShare27ShareSend

Related Posts

Q&A: Can the tech behind crypto help align AI with human values?
AI

Q&A: Can the tech behind crypto help align AI with human values?

October 24, 2025
0

October 24, 2025 The GIST Q&A: Can the tech behind crypto help align AI with human values? Sadie Harley scientific editor Robert Egan associate editor Editors' notes This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's...

Read moreDetails
A common language to describe and assess human–agent teams

A common language to describe and assess human–agent teams

October 24, 2025
Strength of gender biases in AI images varies across languages

Strength of gender biases in AI images varies across languages

October 24, 2025
Anthropic inks multibillion-dollar deal with Google for AI chips

Anthropic inks multibillion-dollar deal with Google for AI chips

October 24, 2025
Extent of AI-created content in American news and opinion pages revealed

Extent of AI-created content in American news and opinion pages revealed

October 24, 2025
How to ensure youth, parents, educators and tech companies are on the same page on AI

How to ensure youth, parents, educators and tech companies are on the same page on AI

October 24, 2025
AI-powered bots increase social media post engagement but do not boost overall user activity

AI-powered bots increase social media post engagement but do not boost overall user activity

October 23, 2025

Recent News

Q&A: Can the tech behind crypto help align AI with human values?

Q&A: Can the tech behind crypto help align AI with human values?

October 24, 2025

Donations from the Cryptocurrency Industry Flood the White House! “Ripple (XRP), Tether (USDT), and Coinbase…”

October 24, 2025
Netflix reportedly shutters studio behind Squid Game mobile spinoff

Netflix reportedly shutters studio behind Squid Game mobile spinoff

October 24, 2025
A common language to describe and assess human–agent teams

A common language to describe and assess human–agent teams

October 24, 2025

TOP News

  • God help us, Donald Trump plans to sell a phone

    God help us, Donald Trump plans to sell a phone

    602 shares
    Share 241 Tweet 151
  • Investment Giant 21Shares Announces New Five Altcoins Including Avalanche (AVAX)!

    602 shares
    Share 241 Tweet 151
  • WhatsApp has ads now, but only in the Updates tab

    602 shares
    Share 241 Tweet 151
  • Tron Looks to go Public in the U.S., Form Strategy Like TRX Holding Firm: FT

    603 shares
    Share 241 Tweet 151
  • AI generates data to help embodied agents ground language to 3D world

    601 shares
    Share 240 Tweet 150
  • About Us
  • Contact Us
  • Privacy Policy
  • Terms of Use
Advertising: digestmediaholding@gmail.com

Disclaimer: Information found on cryptoreportclub.com is those of writers quoted. It does not represent the opinions of cryptoreportclub.com on whether to sell, buy or hold any investments. You are advised to conduct your own research before making any investment decisions. Use provided information at your own risk.
cryptoreportclub.com covers fintech, blockchain and Bitcoin bringing you the latest crypto news and analyses on the future of money.

© 2023-2025 Cryptoreportclub. All Rights Reserved

Welcome Back!

Login to your account below

Forgotten Password?

Retrieve your password

Please enter your username or email address to reset your password.

Log In
No Result
View All Result
  • Crypto news
  • AI
  • Technologies

Disclaimer: Information found on cryptoreportclub.com is those of writers quoted. It does not represent the opinions of cryptoreportclub.com on whether to sell, buy or hold any investments. You are advised to conduct your own research before making any investment decisions. Use provided information at your own risk.
cryptoreportclub.com covers fintech, blockchain and Bitcoin bringing you the latest crypto news and analyses on the future of money.

© 2023-2025 Cryptoreportclub. All Rights Reserved