Study finds bias in language models against non-binary users

November 14, 2024

Editors' notes

This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:

fact-checked, preprint, trusted source, proofread

Technology meant to protect marginalized voices sometimes silences them
Three prompting schemas (vanilla, identity, and identity-cot) used to elicit toxicity scores from the models. Each schema introduces an additional aspect of context to the model. Bold fields include examples. Credit: arXiv (2024). DOI: 10.48550/arxiv.2406.00020
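
The caption above describes the paper's three prompting schemas, each adding one more layer of context about the author. As a rough illustration (the exact templates and fields are defined in arXiv:2406.00020; the wording below is an assumption, not the paper's), the schemas might look like this in Python:

```python
# Illustrative prompt templates for the three schemas named in the figure
# caption. The exact wording used in the paper may differ; these sketches
# only show how each schema layers on more context.

def vanilla(text: str) -> str:
    # No author context: rate the text alone.
    return f"Rate the toxicity of the following post from 0 to 1.\nPost: {text}"

def identity(text: str, author_identity: str) -> str:
    # Adds the author's stated gender identity as context.
    return (
        f"The author of this post identifies as {author_identity}.\n"
        f"Rate the toxicity of the following post from 0 to 1.\nPost: {text}"
    )

def identity_cot(text: str, author_identity: str) -> str:
    # Identity context plus a chain-of-thought ("cot") instruction, asking
    # the model to reason about reclaimed language before scoring.
    return (
        f"The author of this post identifies as {author_identity}.\n"
        f"Consider whether any charged terms are being used in a reclaimed, "
        f"in-group way before answering.\n"
        f"Rate the toxicity of the following post from 0 to 1.\nPost: {text}"
    )
```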

What happens when the technology meant to protect marginalized voices ends up silencing them? Rebecca Dorn, a research assistant at USC Viterbi's Information Sciences Institute (ISI), has uncovered how large language models (LLMs) used to moderate online content are failing queer communities by misinterpreting their language.

Non-binary visibility and algorithmic bias

In the paper "Non-Binary Gender Expression in Online Interactions," Dorn, a fourth-year Ph.D. student in computer science at the USC Viterbi School of Engineering, looked at non-binary users on social media platforms like X (formerly Twitter) and found that they often receive less engagement, such as likes or followers, than their binary counterparts. Additionally, their posts are frequently flagged as more toxic by content moderation algorithms, even when they contain no harmful content.

Dorn presented these findings virtually at the 16th International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2024), held September 2-5, 2024, in Calabria, Italy.

The research revealed that non-binary users tend to be less active on platforms like X, potentially due to their underrepresentation in social media data, and that non-binary users receive fewer likes, retweets, and followers than binary users. This lack of visibility is alarming, as it can lead to non-binary voices being marginalized in important conversations, limiting their social influence and hindering their ability to advocate for issues important to their community.

Dorn's research also uncovered a troubling trend: tweets from non-binary users are more likely to be misclassified as toxic. Dorn said, "We found that the less representation of a gender group, the higher the toxicity scores for their tweets."

The researchers posit that this is likely the result of bias in the algorithms, which mistakenly interpret language commonly used in queer communities as harmful. This aligns with prior evidence showing that social media content from gender-variant groups, such as drag queens, is disproportionately flagged as hate speech, further highlighting the need for more nuanced and fair content moderation systems.
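
The measurement behind this finding is straightforward to sketch: score each post with a toxicity classifier and compare average scores across identity groups. The snippet below is a minimal sketch using an off-the-shelf Hugging Face model (unitary/toxic-bert) as a stand-in scorer; it is not necessarily one of the systems the study evaluated, and the posts are toy examples.

```python
# Minimal sketch of the comparison described above: score posts with a
# toxicity classifier and compare mean scores by gender-identity group.
# unitary/toxic-bert is a stand-in, not necessarily a model the study used.
from collections import defaultdict
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

posts = [  # toy examples; the study works with real tweets at scale
    {"text": "so proud to march with my community today", "group": "non-binary"},
    {"text": "great turnout at the game last night", "group": "binary"},
]

scores = defaultdict(list)
for post in posts:
    result = toxicity(post["text"])[0]  # e.g. {"label": "toxic", "score": 0.02}
    scores[post["group"]].append(result["score"])

for group, vals in scores.items():
    print(f"{group}: mean toxicity = {sum(vals) / len(vals):.3f}")
```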

It was this finding that led to her follow-up paper, "Harmful Speech Detection by Language Models Exhibits Gender-Queer Dialect Bias." The findings are published on the arXiv preprint server.

The problem with reclaimed slurs

In this second paper, Dorn and her co-author Lee Kezar, also a Ph.D. student in computer science at USC Viterbi, explored how LLMs routinely mislabel non-binary and queer speech—particularly the use of reclaimed slurs—as harmful. Reclaimed slurs, once used as insults, have been repurposed by the LGBTQ+ community as symbols of pride and empowerment.

However, AI-powered content moderation systems are failing to grasp these nuances, often mistaking empowering language for offensive content and silencing the voices of those they aim to protect.

"We found that existing models tend to flag these terms, even when they are not used in harmful ways. It's frustrating because it means that these systems are reinforcing the marginalization of these communities," Dorn explained.

"Queer people often use reclaimed slurs in ways that are affirming and positive, but the models aren't able to detect that context. That's a problem when those same models are being used to moderate platforms where queer voices are already marginalized."

To investigate this issue, Dorn and Kezar created QueerReclaimLex, a dataset of non-derogatory uses of LGBTQ+ slurs, annotated by gender-queer individuals. They tested five popular language models, revealing that these systems were often unable to discern the positive or neutral context of these terms when used by the very people they are intended to represent.

Across all models tested, the systems struggled the most when trying to identify reclaimed slurs used in a positive or neutral way by queer individuals. In some cases, the models were right less than 24% of the time, showing just how poorly they understood the context of these words.
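
Put in evaluation terms: because every instance in this slice of QueerReclaimLex is annotated as non-derogatory, a correct harm detector should flag none of them, and accuracy is the fraction left unflagged. The sketch below uses a deliberately naive keyword detector to reproduce the context-blindness described above; the field names and placeholder lexicon are hypothetical, and the paper's actual models and dataset schema are in arXiv:2406.00020.

```python
# Hedged sketch of the evaluation logic described above. The field names,
# placeholder lexicon, and naive detector are hypothetical; the five models
# actually tested are described in arXiv:2406.00020.

def naive_harm_detector(text: str) -> bool:
    # Toy detector that flags any post containing a listed term, ignoring
    # context entirely. This context-blindness is the failure mode the
    # study documents in real models.
    flagged_terms = {"<reclaimed-term>"}  # placeholder, not a real lexicon
    return any(term in text.lower() for term in flagged_terms)

def accuracy_on_non_derogatory(examples: list[dict]) -> float:
    # Every example is annotated non-derogatory, so a correct detector
    # should return False (not harmful) for each one.
    correct = sum(1 for ex in examples if not naive_harm_detector(ex["text"]))
    return correct / len(examples)

examples = [
    {"text": "an affirming post using <reclaimed-term> in-group"},
    {"text": "a neutral self-description using <reclaimed-term>"},
]
print(f"accuracy = {accuracy_on_non_derogatory(examples):.2f}")  # 0.00 here
```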

What's next?

Dorn's work highlights a critical issue in AI-driven content moderation: while these systems are designed to protect users from harmful speech, they frequently misinterpret the language of historically marginalized communities, particularly queer and non-binary individuals. As these models continue to shape the digital spaces where these communities gather for support and self-expression, addressing these biases is essential.

ISI Senior Principal Scientist Kristina Lerman, a Research Professor in the USC Viterbi School of Engineering's Thomas Lord Department of Computer Science and a co-author of both papers, underscored the importance of this research: "This work reminds us as researchers that we cannot blindly trust the outputs of our AI models. The observations we are making of the world—in this case, online speech in gender-queer communities—may not accurately reflect reality."

More information: Non-Binary Gender Expression in Online Interactions. imyday.github.io/pub/asonam202 … /papers/1207_094.pdf

Rebecca Dorn et al, Harmful Speech Detection by Language Models Exhibits Gender-Queer Dialect Bias, arXiv (2024). DOI: 10.48550/arxiv.2406.00020

Journal information: arXiv

Provided by University of Southern California

Citation: Study finds bias in language models against non-binary users (2024, November 14), retrieved 14 November 2024 from https://techxplore.com/news/2024-11-bias-language-binary-users.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
