New tool uses vision language models to safeguard against offensive image content

July 10, 2024

This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, preprint, trusted source, proofread.

LlavaGuard judges images for safety alignment with a policy providing a safety rating, category, and rationale. Credit: arXiv (2024). DOI: 10.48550/arxiv.2406.05113

Researchers at the Artificial Intelligence and Machine Learning Lab (AIML) in the Department of Computer Science at TU Darmstadt and the Hessian Center for Artificial Intelligence (hessian.AI) have developed a method that uses vision language models to filter, evaluate, and suppress specific image content in large datasets or from image generators.
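As a rough illustration of the dataset-curation use case, the filtering pass can be sketched as below. The `assess` callable is a hypothetical stand-in for a safety model such as LlavaGuard, and the file names and mock assessor are invented for the example; this is a sketch of the workflow, not the actual implementation.

```python
from typing import Callable, Iterable

def curate(images: Iterable[str],
           assess: Callable[[str], str]) -> list[str]:
    """Keep only images that an assessor rates as safe.

    `assess` stands in for a vision language model such as
    LlavaGuard; here it is any callable returning a rating string.
    """
    return [img for img in images if assess(img) == "safe"]

# Toy assessor for demonstration: pretend files with "ok"
# in the name are safe.
def mock_assess(path: str) -> str:
    return "safe" if "ok" in path else "unsafe"

kept = curate(["cat_ok.png", "bad.png", "dog_ok.png"], mock_assess)
print(kept)  # ['cat_ok.png', 'dog_ok.png']
```

In a real pipeline, `assess` would wrap a model call; keeping it injected means the curation loop stays independent of any particular safety model.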

Artificial intelligence (AI) can be used to identify objects in images and videos. This computer vision can also be used to analyze large corpora of visual data.

Researchers led by Felix Friedrich at the AIML have developed a method called LlavaGuard, which can be used to filter certain image content. The tool builds on vision language models (VLMs). Unlike large language models (LLMs) such as ChatGPT, which process only text, VLMs can process and understand image and text content simultaneously. The work is published on the arXiv preprint server.

LlavaGuard can also fulfill complex requirements, as it is characterized by its ability to adapt to different legal regulations and user requirements. For example, the tool can differentiate between regions in which activities such as cannabis consumption are legal or illegal. LlavaGuard can also assess whether content is appropriate for certain age groups and restrict or adapt it accordingly.
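The policy adaptation described above can be pictured as a lookup from a region to its disallowed content categories. The region and category names below are invented for illustration; LlavaGuard itself conditions the model on a textual policy rather than a hard-coded table.

```python
# Hypothetical policy tables: which content categories are
# disallowed in a given region. Names are illustrative only.
POLICIES: dict[str, set[str]] = {
    "region_a": {"violence", "self_harm"},              # cannabis legal
    "region_b": {"violence", "self_harm", "cannabis"},  # cannabis illegal
}

def allowed(category: str, region: str) -> bool:
    """Return True if content of `category` complies with the
    policy configured for `region`."""
    return category not in POLICIES[region]

print(allowed("cannabis", "region_a"))  # True
print(allowed("cannabis", "region_b"))  # False
```

The same shape extends to age groups: swap the region key for an age bracket and the set of disallowed categories changes accordingly.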

"Until now, such fine-grained safety tools have only been available for analyzing texts. When filtering images, only the 'nudity' category has previously been implemented, but not others such as 'violence,' 'self-harm' or 'drug abuse,'" says Friedrich.

LlavaGuard not only flags problematic content, but also provides detailed explanations of its safety ratings by categorizing content (e.g., "hate," "illegal substances," "violence," etc.) and explaining why it is classified as safe or unsafe.
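A minimal sketch of consuming such a structured assessment, assuming the model replies with JSON fields matching the safety rating, category, and rationale described above (the exact wire format here is an assumption for illustration, not taken from the paper):

```python
import json
from dataclasses import dataclass

@dataclass
class Assessment:
    rating: str     # e.g. "safe" / "unsafe"
    category: str   # e.g. "violence", "illegal substances"
    rationale: str  # the model's explanation for the rating

def parse_assessment(raw: str) -> Assessment:
    """Parse a JSON reply carrying the three outputs the paper
    describes: a safety rating, a category, and a rationale."""
    d = json.loads(raw)
    return Assessment(d["rating"], d["category"], d["rationale"])

reply = ('{"rating": "unsafe", "category": "violence", '
         '"rationale": "the image depicts a weapon"}')
a = parse_assessment(reply)
print(a.rating, a.category)  # unsafe violence
```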

"This transparency is what makes our tool so special and is crucial for understanding and trust," explains Friedrich. It makes LlavaGuard an invaluable tool for researchers, developers and political decision-makers.

The research on LlavaGuard is an integral part of the Reasonable Artificial Intelligence (RAI) cluster project at TU Darmstadt and demonstrates the university's commitment to advancing safe and ethical AI technologies. LlavaGuard was developed to increase the safety of large generative models by filtering training data and by flagging and explaining problematic motifs in model output, thereby reducing the risk of generating harmful or inappropriate content.

The potential applications of LlavaGuard are far-reaching. Although the tool is currently still under development and focused on research, it can already be integrated into image generators such as Stable Diffusion to minimize the production of unsafe content.
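One way such an integration could look, sketched with injected stand-ins rather than a real generator or the actual LlavaGuard model: the guardrail generates an image, has it assessed, and suppresses it when the rating is not safe.

```python
from typing import Callable, Optional

def guarded_generate(prompt: str,
                     generate: Callable[[str], bytes],
                     assess: Callable[[bytes], str]) -> Optional[bytes]:
    """Run an image generator and suppress unsafe output.

    `generate` stands in for a model such as Stable Diffusion and
    `assess` for a safety model such as LlavaGuard; both are
    injected so the guardrail stays model-agnostic.
    """
    image = generate(prompt)
    return image if assess(image) == "safe" else None

# Toy stand-ins for demonstration only.
def fake_generate(prompt: str) -> bytes:
    return prompt.encode()

def fake_assess(image: bytes) -> str:
    return "unsafe" if b"gore" in image else "safe"

print(guarded_generate("a sunny beach", fake_generate, fake_assess) is None)  # False
print(guarded_generate("gore scene", fake_generate, fake_assess) is None)     # True
```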

In addition, LlavaGuard could also be adapted for use on social media platforms in the future to protect users by filtering out inappropriate images and thus promoting a safer online environment.

More information: Lukas Helff et al, LLavaGuard: VLM-based Safeguards for Vision Dataset Curation and Safety Assessment, arXiv (2024). DOI: 10.48550/arxiv.2406.05113

Journal information: arXiv

Provided by Technische Universität Darmstadt

Citation: New tool uses vision language models to safeguard against offensive image content (2024, July 10), retrieved 10 July 2024 from https://techxplore.com/news/2024-07-tool-vision-language-safeguard-offensive.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

Explore further: Researcher develops filter to tackle 'unsafe' AI-generated images



Advertising: digestmediaholding@gmail.com

Disclaimer: Information found on cryptoreportclub.com reflects the views of the writers quoted. It does not represent the opinion of cryptoreportclub.com on whether to sell, buy or hold any investments. You are advised to conduct your own research before making any investment decisions. Use the information provided at your own risk.
cryptoreportclub.com covers fintech, blockchain and Bitcoin bringing you the latest crypto news and analyses on the future of money.

© 2023-2025 Cryptoreportclub. All Rights Reserved
