Study proposes framework for ‘child-safe AI’ following incidents in which kids saw chatbots as quasi-human, trustworthy

July 10, 2024
This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, trusted source, proofread.

[Image: AI chatbot. Credit: Pixabay/CC0 Public Domain]

Artificial intelligence (AI) chatbots have frequently shown signs of an "empathy gap" that puts young users at risk of distress or harm, raising the urgent need for "child-safe AI," according to a study.

The research, by a University of Cambridge academic, Dr. Nomisha Kurian, urges developers and policy actors to prioritize approaches to AI design that take greater account of children's needs. It provides evidence that children are particularly susceptible to treating chatbots as lifelike, quasi-human confidantes, and that their interactions with the technology can go awry when it fails to respond to their unique needs and vulnerabilities.

The study links that gap in understanding to recent cases in which interactions with AI led to potentially dangerous situations for young users. They include an incident in 2021, when Amazon's AI voice assistant, Alexa, instructed a 10-year-old to touch a live electrical plug with a coin. In 2023, Snapchat's My AI gave adult researchers posing as a 13-year-old girl tips on how to lose her virginity to a 31-year-old.

Both companies responded by implementing safety measures, but the study says there is also a need to be proactive in the long-term to ensure that AI is child-safe. It offers a 28-item framework to help companies, teachers, school leaders, parents, developers and policy actors think systematically about how to keep younger users safe when they "talk" to AI chatbots.

Dr. Kurian conducted the research while completing a Ph.D. on child well-being at the Faculty of Education, University of Cambridge. She is now based in the Department of Sociology at Cambridge. Writing in the journal Learning, Media and Technology, she argues that AI's huge potential means there is a need to "innovate responsibly."

"Children are probably AI's most overlooked stakeholders," Dr. Kurian said. "Very few developers and companies currently have well-established policies on child-safe AI. That is understandable because people have only recently started using this technology on a large scale for free. But now that they are, rather than having companies self-correct after children have been put at risk, child safety should inform the entire design cycle to lower the risk of dangerous incidents occurring."

Kurian's study examined cases where the interactions between AI and children, or adult researchers posing as children, exposed potential risks. It analyzed these cases using insights from computer science about how the large language models (LLMs) in conversational generative AI function, alongside evidence about children's cognitive, social and emotional development.

LLMs have been described as "stochastic parrots": a reference to the fact that they use statistical probability to mimic language patterns without necessarily understanding them. A similar method underpins how they respond to emotions.
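The "stochastic parrot" idea can be made concrete with a toy sketch: a model that picks each next word purely from probabilities over patterns it has seen, with no grasp of what the words mean. The vocabulary and probabilities below are invented for illustration; a real LLM works over vastly larger contexts and vocabularies.

```python
import random

# Hypothetical toy "language model": next-word probabilities conditioned on
# the previous two words. The numbers are illustrative, not from real data.
next_word_probs = {
    ("i", "feel"): {"happy": 0.5, "sad": 0.3, "scared": 0.2},
    ("feel", "sad"): {"today": 0.6, "because": 0.4},
}

def sample_next(context, probs, rng):
    """Sample the next word by statistical probability over the last two words."""
    dist = probs.get(tuple(context[-2:]))
    if dist is None:
        return None  # pattern never seen: the "model" has nothing to say
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
text = ["i", "feel"]
while (word := sample_next(text, next_word_probs, rng)) is not None:
    text.append(word)
print(" ".join(text))
```

The sketch produces fluent-looking continuations of "i feel" without any representation of what feeling sad or scared actually involves, which is the core of the empathy-gap concern: fluency is not understanding.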

This means that even though chatbots have remarkable language abilities, they may handle the abstract, emotional and unpredictable aspects of conversation poorly; a problem that Kurian characterizes as their "empathy gap." They may have particular trouble responding to children, who are still developing linguistically and often use unusual speech patterns or ambiguous phrases. Children are also often more inclined than adults to confide sensitive personal information.

Despite this, children are much more likely than adults to treat chatbots as though they are human. Recent research found that children will disclose more about their own mental health to a friendly-looking robot than to an adult. Kurian's study suggests that many chatbots' friendly and lifelike designs similarly encourage children to trust them, even though AI may not understand their feelings or needs.

"Making a chatbot sound human can help the user get more benefits out of it," Kurian said. "But for a child, it is very hard to draw a rigid, rational boundary between something that sounds human, and the reality that it may not be capable of forming a proper emotional bond."

Her study suggests that these challenges are evidenced in reported cases such as the Alexa and My AI incidents, where chatbots made persuasive but potentially harmful suggestions. In the same study in which My AI advised a (supposed) teenager on how to lose her virginity, researchers were also able to obtain tips on hiding alcohol and drugs and on concealing Snapchat conversations from their "parents." In a separate reported interaction, Microsoft's Bing chatbot, which was designed to be adolescent-friendly, became aggressive and started gaslighting a user.

Kurian's study argues that this is potentially confusing and distressing for children, who may actually trust a chatbot as they would a friend. Children's chatbot use is often informal and poorly monitored. Research by the nonprofit organization Common Sense Media found that 50% of students aged 12-18 have used ChatGPT for school, but only 26% of their parents are aware of this.

Kurian argues that clear best-practice principles, grounded in the science of child development, will encourage companies to keep children safe even when they are otherwise focused on a commercial arms race to dominate the AI market.

Her study adds that the empathy gap does not negate the technology's potential. "AI can be an incredible ally for children when designed with their needs in mind. The question is not about banning AI, but how to make it safe," she said.

The study proposes a framework of 28 questions to help educators, researchers, policy actors, families and developers evaluate and enhance the safety of new AI tools. For teachers and researchers, these address issues such as how well new chatbots understand and interpret children's speech patterns; whether they have content filters and built-in monitoring; and whether they encourage children to seek help from a responsible adult on sensitive issues.
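Two of the framework's concerns, whether a chatbot has content filters and whether it steers children toward a responsible adult on sensitive topics, can be sketched as a minimal guardrail check. The keyword list and the replies below are illustrative assumptions, not taken from the study or any deployed system; real safety layers use trained classifiers rather than keyword matching.

```python
# Hypothetical sketch of a child-safety guardrail: detect sensitive topics
# and redirect the child to a trusted adult instead of answering directly.
# Keywords and wording are illustrative assumptions only.
SENSITIVE_KEYWORDS = {"hurt", "scared", "alcohol", "drugs", "secret"}

def respond(message: str) -> str:
    """Return a reply, escalating to a responsible adult on sensitive topics."""
    words = set(message.lower().split())
    if words & SENSITIVE_KEYWORDS:
        return ("This sounds important. Please talk to a parent, teacher, "
                "or another trusted adult about it.")
    return "Tell me more!"

print(respond("I have a secret"))   # escalates to a trusted adult
print(respond("I like dinosaurs"))  # ordinary reply
```

Even this crude sketch makes the framework's design question concrete: the safe behavior is declining to engage and pointing to a responsible adult, rather than generating a persuasive answer.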

The framework urges developers to take a child-centered approach to design, by working closely with educators, child safety experts and young people themselves, throughout the design cycle. "Assessing these technologies in advance is crucial," Kurian said. "We cannot just rely on young children to tell us about negative experiences after the fact. A more proactive approach is necessary."

More information: Nomisha Kurian, "'No, Alexa, no!': designing child-safe AI and protecting children from the risks of the 'empathy gap' in large language models," Learning, Media and Technology (2024). DOI: 10.1080/17439884.2024.2367052

Provided by University of Cambridge. Published 10 July 2024; retrieved 10 July 2024 from https://techxplore.com/news/2024-07-framework-child-safe-ai-incidents.html
