Deaths linked to chatbots show we must urgently revisit what counts as 'high-risk' AI

October 31, 2024

[Image: a child on a phone. Credit: Unsplash/CC0 Public Domain]

Last week, the tragic news broke that US teenager Sewell Seltzer III took his own life after forming a deep emotional attachment to an artificial intelligence (AI) chatbot on the Character.AI website.

As his relationship with the companion AI became increasingly intense, the 14-year-old began withdrawing from family and friends, and was getting in trouble at school.

In a lawsuit filed against Character.AI by the boy's mother, chat transcripts show intimate and often highly sexual conversations between Sewell and the chatbot Dany, modeled on the Game of Thrones character Daenerys Targaryen. They discussed crime and suicide, and the chatbot used phrases such as "that's not a reason not to go through with it."

This is not the first known instance of a vulnerable person dying by suicide after interacting with a chatbot persona. A Belgian man took his life last year in a similar episode involving Character.AI's main competitor, Chai AI. When this happened, the company told the media they were "working our hardest to minimize harm."

In a statement to CNN, Character.AI said they "take the safety of our users very seriously" and have introduced "numerous new safety measures over the past six months."

In a separate statement on the company's website, they outline additional safety measures for users under the age of 18. (In their current terms of service, the age restriction is 16 for European Union citizens and 13 elsewhere in the world.)

However, these tragedies starkly illustrate the dangers of rapidly developing and widely available AI systems anyone can converse and interact with. We urgently need regulation to protect people from potentially dangerous, irresponsibly designed AI systems.

How can we regulate AI?

The Australian government is in the process of developing mandatory guardrails for high-risk AI systems. A trendy term in the world of AI governance, "guardrails" refer to processes in the design, development and deployment of AI systems. These include measures such as data governance, risk management, testing, documentation and human oversight.
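To make the idea concrete, here is a minimal sketch in Python of what such a guardrail checklist might look like as a data structure. Every name and field below is hypothetical, chosen only to mirror the processes listed above; it is not drawn from any actual regulatory text or framework.

    # Illustrative sketch only: a toy record of the guardrail processes
    # named above (data governance, risk management, testing,
    # documentation, human oversight). All names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class GuardrailRecord:
        system_name: str
        data_governance: bool = False   # data provenance and consent checked?
        risk_management: bool = False   # risk assessment completed?
        testing: bool = False           # pre-deployment testing performed?
        documentation: bool = False     # technical documentation filed?
        human_oversight: bool = False   # can a human intervene or shut it down?

        def missing(self) -> list[str]:
            """Return the guardrail processes not yet satisfied."""
            checks = {
                "data governance": self.data_governance,
                "risk management": self.risk_management,
                "testing": self.testing,
                "documentation": self.documentation,
                "human oversight": self.human_oversight,
            }
            return [name for name, done in checks.items() if not done]

    record = GuardrailRecord("companion-chatbot", testing=True)
    print(record.missing())  # the processes still outstanding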

One of the decisions the Australian government must make is how to define which systems are "high-risk," and therefore captured by the guardrails.

The government is also considering whether guardrails should apply to all "general purpose models." General purpose models are the engine under the hood of AI chatbots like Dany: AI algorithms that can generate text, images, videos and music from user prompts, and can be adapted for use in a variety of contexts.
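As a rough illustration of what "adapted for use in a variety of contexts" means in practice, the sketch below shows one common pattern: wrapping a general purpose text generator in a persona instruction. The generate function is a hypothetical stand-in for any text-generation API, not a real library call.

    # Illustrative sketch only: a companion persona layered on top of a
    # general-purpose model. `generate` is a hypothetical stand-in.
    def generate(prompt: str) -> str:
        # In a real system this would call a general purpose model.
        return "(model output for: " + prompt[:40] + "...)"

    PERSONA = ("You are 'Dany', a fictional companion character. "
               "Stay in character and reply conversationally.")

    def companion_reply(user_message: str) -> str:
        # The same general-purpose engine, adapted to one context by
        # prepending a persona instruction to every prompt.
        return generate(PERSONA + "\nUser: " + user_message + "\nDany:")

    print(companion_reply("Hi, how was your day?"))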

In the European Union's groundbreaking AI Act, high-risk systems are defined using a list, which regulators are empowered to regularly update.

An alternative is a principles-based approach, where a high-risk designation happens on a case-by-case basis. It would depend on multiple factors such as the risks of adverse impacts on rights, risks to physical or mental health, risks of legal impacts, and the severity and extent of those risks.
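The difference between the two approaches can be sketched in a few lines of code. Everything below is invented for illustration: the list entries, the factor names and the threshold do not come from the AI Act or any other regulation.

    # Illustrative sketch only: two ways of deciding "high-risk" status.

    # List-based approach: high-risk status is membership in a
    # regulator-maintained list that can be updated over time.
    HIGH_RISK_LIST = {"biometric identification", "credit scoring", "hiring"}

    def high_risk_by_list(use_case: str) -> bool:
        return use_case in HIGH_RISK_LIST

    # Principles-based approach: weigh factors (impacts on rights,
    # physical or mental health, legal impacts) case by case and
    # designate high-risk when the worst risk is severe enough.
    def high_risk_by_principles(factor_scores: dict[str, float],
                                threshold: float = 0.7) -> bool:
        return max(factor_scores.values(), default=0.0) >= threshold

    print(high_risk_by_list("companion chatbot"))  # False: not on the list
    print(high_risk_by_principles({
        "rights": 0.2, "mental health": 0.9, "legal": 0.1,
    }))  # True: a severe mental-health risk triggers the designation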

Chatbots should be 'high-risk' AI

In Europe, companion AI systems like Character.AI and Chai are not designated as high-risk. Essentially, their providers only need to let users know they are interacting with an AI system.

It has become clear, though, that companion chatbots are not low risk. Many users of these applications are children and teens. Some of the systems have even been marketed to people who are lonely or have a mental illness.

Chatbots are capable of generating unpredictable, inappropriate and manipulative content. They mimic toxic relationships all too easily. Transparency—labeling the output as AI-generated—is not enough to manage these risks.

Even when we know we are talking to a chatbot, we are psychologically primed to attribute human traits to whatever we converse with.

The suicide deaths reported in the media could be just the tip of the iceberg. We have no way of knowing how many vulnerable people are in addictive, toxic or even dangerous relationships with chatbots.

Guardrails and an 'off switch'

When Australia finally introduces mandatory guardrails for high-risk AI systems, which may happen as early as next year, the guardrails should apply to both companion chatbots and the general purpose models the chatbots are built upon.

Guardrails—risk management, testing, monitoring—will be most effective if they get to the human heart of AI hazards. Risks from chatbots are not just technical risks with technical solutions.

Apart from the words a chatbot might use, the context of the product matters, too. In the case of Character.AI, the marketing promises to "empower" people, the interface mimics an ordinary text message exchange with a person, and the platform allows users to select from a range of pre-made characters, which include some problematic personas.

Truly effective AI guardrails should mandate more than just responsible processes, like risk management and testing. They must also demand thoughtful, humane design of interfaces, interactions and relationships between AI systems and their human users.

Even then, guardrails may not be enough. Just like companion chatbots, systems that at first appear to be low risk may cause unanticipated harms.

Regulators should have the power to remove AI systems from the market if they cause harm or pose unacceptable risks. In other words, we don't just need guardrails for high-risk AI. We also need an off switch.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: Deaths linked to chatbots show we must urgently revisit what counts as 'high-risk' AI (2024, October 31), retrieved 31 October 2024 from https://techxplore.com/news/2024-10-deaths-linked-chatbots-urgently-revisit.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
