Benchmarking hallucinations: New metric tracks where multimodal reasoning models go wrong

June 14, 2025
Ingrid Fadelli, contributing writer
Gaby Clark, scientific editor
Robert Egan, associate editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, preprint, trusted source, proofread.

(a) Example of outputs from a reasoning model and a non-reasoning model on a perception task; red highlights indicate visual hallucination. Multimodal reasoning models are generally more prone to amplifying hallucinations during the reasoning process than their non-reasoning counterparts. (b) Performance of different models on reasoning and perception tasks in the RH-Bench dataset; better-performing models sit in the upper right corner. Baseline non-reasoning models of varying scales typically exhibit weaker reasoning capabilities and fewer hallucinations, whereas reasoning models display the opposite trend. Credit: Liu et al.

Over the past decades, computer scientists have introduced increasingly sophisticated machine learning-based models that can perform remarkably well on a wide range of tasks. These include multimodal large language models (MLLMs), systems that can process and generate different types of data, predominantly text, images and video.

Some of these models, such as OpenAI's GPT-4 with Vision (GPT-4V), DeepSeek-R1 and Google Gemini, are now used by people worldwide to create multimodal content, including images for social media posts or articles, as well as texts tailored for specific uses.

While the reasoning abilities of these models have improved considerably in recent years, allowing them to solve mathematical and logical problems, studies have shown that they sometimes produce responses that are not grounded in the input data, for instance by describing details that do not actually exist in an input image.

These hallucinations have been linked to language priors and internal biases that a model may have acquired during training while it was analyzing large text datasets. These biases can override the visual information fed to the model (i.e., input images), causing the model to incorrectly complete the tasks assigned to it.

Researchers at UC Santa Cruz, Stanford University and UC Santa Barbara have recently developed a metric and a diagnostic benchmark that could help to study these hallucinations, specifically focusing on the relationship between the reasoning of MLLMs and their tendency to hallucinate when asked to describe what is portrayed in an input image. These new research tools, presented in a paper on the arXiv preprint server, could contribute to the assessment and advancement of MLLMs.

"Test-time compute has empowered multimodal large language models to generate extended reasoning chains, yielding strong performance on tasks such as multimodal math reasoning," wrote Chengzhi Liu, Zhongxing Xu and their colleagues in their paper.

"However, this improved reasoning ability often comes with increased hallucination: as generations become longer, models tend to drift away from image-grounded content and rely more heavily on language priors."

Comparison of reasoning and non-reasoning models on five perception benchmarks, shown for 3B models (left) and 7B models (right). Higher scores indicate lower hallucination. Credit: arXiv (2025). DOI: 10.48550/arxiv.2505.21523

The researchers first assessed the performance of MLLMs on complex reasoning tasks and found that as reasoning chains (i.e., sequences of logical steps required to solve a problem) grew in length, the models' tendency to hallucinate also increased. They suggested that these hallucinations emerged due to reduced attention to visual stimuli and a greater reliance on language priors.
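
One way to picture this "reduced attention" finding is to track, at each generation step, what fraction of the model's attention mass lands on image-patch tokens, and to watch whether that share decays as the reasoning chain grows. The snippet below is a minimal PyTorch sketch of such a diagnostic; the tensor shape and the visual_attention_share helper are illustrative assumptions, not the analysis code used in the paper.

```python
import torch

def visual_attention_share(step_attn: torch.Tensor, visual_idx: torch.Tensor) -> float:
    """Share of attention the token being generated places on image tokens.

    step_attn: attention weights for the current generation step, with
    shape (layers, heads, kv_len), e.g. as gathered from a model that
    exposes per-step attentions. visual_idx: positions of the image-patch
    tokens in the context. Shapes and names here are illustrative
    assumptions, not the paper's actual analysis code.
    """
    attn = step_attn.mean(dim=(0, 1))  # average over layers and heads -> (kv_len,)
    return (attn[visual_idx].sum() / attn.sum()).item()

# Toy example: 4 layers, 8 heads, 20 context tokens, of which the first 10
# are image patches. A share that falls across successive steps would signal
# the drift toward language priors that the authors describe.
step_attn = torch.rand(4, 8, 20).softmax(dim=-1)
share = visual_attention_share(step_attn, torch.arange(10))
print(f"attention on visual tokens: {share:.2%}")
```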

"Attention analysis shows that longer reasoning chains lead to reduced focus on visual inputs, which contributes to hallucination," wrote Liu, Xu and their colleagues.

"To systematically study this phenomenon, we introduce RH-AUC, a metric that quantifies how a model's perception accuracy changes with reasoning length, allowing us to evaluate whether the model preserves visual grounding during reasoning. We also release RH-Bench, a diagnostic benchmark that spans a variety of multimodal tasks, designed to assess the trade-off between reasoning ability and hallucination."

RH-AUC and RH-Bench, the metric and benchmark developed by Liu, Xu and their colleagues, could soon be used by other researchers to evaluate the interplay between the reasoning abilities of specific MLLMs and their risk of hallucinating. Moreover, the observations presented in the team's paper could guide future efforts to develop models that can reliably tackle complex reasoning tasks without becoming prone to hallucinations.

"Our analysis reveals that larger models typically achieve a better balance between reasoning and perception and that this balance is influenced more by the types and domains of training data than by its overall volume," wrote Liu, Xu and their colleagues. "These findings underscore the importance of evaluation frameworks that jointly consider both reasoning quality and perceptual fidelity."


More information: Chengzhi Liu et al, More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models, arXiv (2025). DOI: 10.48550/arxiv.2505.21523

Journal information: arXiv

© 2025 Science X Network

Citation: Benchmarking hallucinations: New metric tracks where multimodal reasoning models go wrong (2025, June 14), retrieved 14 June 2025 from https://techxplore.com/news/2025-06-benchmarking-hallucinations-metric-tracks-multimodal.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
