Unmasking hidden online hate: A new tool helps catch nasty comments—even when they’re disguised

November 28, 2024
Credit: Pixabay/CC0 Public Domain

People determined to spread toxic messages online have taken to masking their words to bypass automated moderation filters.

A user might replace letters with numbers or symbols, for example, writing "Y0u're st00pid" instead of "You're stupid."

Another tactic involves combining words, such as "IdiotFace." Doing this masks the harmful intent from systems that look for individual toxic words.

Similarly, harmful terms can be altered with spaces or additional characters, such as "h a t e" or "h@te," effectively slipping through keyword-based filters.

While the intent remains harmful, traditional moderation tools often overlook such messages. This leaves users—particularly vulnerable groups—exposed to their negative impact.
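To make the problem concrete, here is a minimal sketch of the kind of keyword filter these tricks defeat. It is written in Python with an illustrative blocklist; none of it is drawn from the researchers' system. Only the undisguised message matches.

```python
# Minimal sketch of a naive keyword filter, illustrating why the
# disguises above slip through. Blocklist and inputs are illustrative.
BLOCKLIST = {"stupid", "idiot", "hate"}

def naive_filter(text: str) -> bool:
    """Return True if any blocklisted word appears verbatim."""
    words = text.lower().split()
    return any(w.strip(".,!?") in BLOCKLIST for w in words)

for msg in ["You're stupid",      # caught: exact match
            "Y0u're st00pid",     # missed: digit substitution
            "IdiotFace",          # missed: compound word
            "h a t e",            # missed: inserted spaces
            "h@te"]:              # missed: symbol substitution
    print(msg, "->", "flagged" if naive_filter(msg) else "missed")
```

Every disguised variant sails through, because the filter compares surface strings rather than underlying intent.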

To address this, we have developed a novel pre-processing technique designed to help moderation tools more effectively handle the subtle complexities of hidden toxicity.

An intelligent assistant

Our tool works in conjunction with existing moderation systems. It acts as an intelligent assistant, preparing content for deeper and more accurate evaluation by restructuring and refining input text.

By addressing common tricks users employ to disguise harmful intent, it ensures moderation systems are more effective. The tool performs three key functions.

  1. It first simplifies the text. Irrelevant elements, such as excessive punctuation or extraneous characters, are removed to make the text straightforward and ready for evaluation.
  2. It then standardizes what is written. Variations in spelling, phrasing and grammar are resolved. This includes interpreting deliberate misspellings ("h8te" for "hate").
  3. Finally, it looks for patterns. Recurring strategies such as breaking up toxic words ("I d i o t"), or embedding them within benign phrases, are identified and normalized to reveal the underlying intent.

These steps can break apart compound words like "IdiotFace" or normalize modified phrases like "Y0u're st00pid." This makes harmful content visible to traditional filters.
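The article does not publish the implementation, so the sketch below is only a plausible illustration of the three steps, assuming a hand-made substitution table and a couple of regular expressions. A production system would need far richer mappings and genuine spelling normalization.

```python
import re

# Illustrative character-substitution table (0->o, 1->i, 3->e, 4->a,
# 5->s, @->a, $->s); real systems use far richer mappings.
LEET = str.maketrans("01345@$", "oieasas")

def simplify(text: str) -> str:
    """Step 1: strip excessive punctuation ("!!!" -> "!")."""
    return re.sub(r"([!?.])\1+", r"\1", text).strip()

def standardize(text: str) -> str:
    """Step 2: resolve digit/symbol substitutions ("h@te" -> "hate")."""
    return text.translate(LEET)

def reveal_patterns(text: str) -> str:
    """Step 3: undo spacing tricks and split compound words."""
    # "I d i o t" -> "Idiot": merge runs of spaced-out single characters.
    text = re.sub(r"\b\w(?: \w){2,}\b",
                  lambda m: m.group(0).replace(" ", ""), text)
    # "IdiotFace" -> "Idiot Face": split at lowercase-to-uppercase seams.
    return re.sub(r"(?<=[a-z])(?=[A-Z])", " ", text)

def preprocess(text: str) -> str:
    """Run all three steps, then lowercase for keyword matching."""
    return reveal_patterns(standardize(simplify(text))).lower()

for msg in ["Y0u're an idi0t!!!", "IdiotFace", "I d i o t", "h@te"]:
    print(f"{msg!r} -> {preprocess(msg)!r}")
# "Y0u're an idi0t!!!" -> "you're an idiot!"
# 'IdiotFace'          -> 'idiot face'
# 'I d i o t'          -> 'idiot'
# 'h@te'               -> 'hate'
```

The cleaned output can then be handed, unchanged, to a downstream filter such as the naive one sketched earlier, which now matches "idiot" and "hate" directly.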

Importantly, our work is not about reinventing the wheel but ensuring the existing wheel functions as effectively as it should, even when faced with disguised toxic messages.

Catching subtle forms of toxicity

The applications of this tool extend across a wide range of online environments. For social media platforms, it enhances the ability to detect harmful messages, creating a safer space for users. This is particularly important for protecting younger audiences, who may be more vulnerable to online abuse.

By catching subtle forms of toxicity, the tool helps to prevent harmful behaviors like bullying from persisting unchecked.

Businesses can also use this technology to safeguard their online presence. Negative campaigns or covert attacks on brands often employ subtle and disguised messaging to avoid detection. By processing such content before it is moderated, the tool ensures that businesses can respond swiftly to any reputational threats.

Additionally, policymakers and organizations that monitor public discourse can benefit from this system. Hidden toxicity, particularly in polarized discussions, can undermine efforts to maintain constructive dialogue.

The tool provides a more robust way of identifying problematic content, helping to ensure that debates remain respectful and productive.

Better moderation

Our tool marks an important advance in content moderation. By addressing the limitations of traditional keyword-based filters, it offers a practical solution to the persistent issue of hidden toxicity.

Importantly, it demonstrates how small but focused improvements can make a big difference in creating safer and more inclusive online environments. As digital communication continues to evolve, tools like ours will play an increasingly vital role in protecting users and fostering positive interactions.

While this research addresses the challenges of detecting hidden toxicity within text, the journey is far from over.

Future advances will likely delve deeper into the complexities of context—analyzing how meaning shifts depending on conversational dynamics, cultural nuances and intent.

By building on this foundation, the next generation of content moderation systems could uncover not just what is being said but also the circumstances in which it is said, paving the way for safer and more inclusive online spaces.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: Unmasking hidden online hate: A new tool helps catch nasty comments—even when they're disguised (2024, November 28), retrieved 28 November 2024 from https://techxplore.com/news/2024-11-unmasking-hidden-online-tool-nasty.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
