Explainable AI: New framework increases transparency in decision-making systems

June 13, 2025

Gaby Clark, scientific editor; Robert Egan, associate editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, preprint, trusted source, proofread.

In high-stakes situations like medical diagnostics, understanding why an AI model made a decision is as important as the decision itself. A new framework called Constrained Concept Refinement offers accurate, explainable predictions with low computational cost. Credit: ChatGPT image prompted by Salar Fattahi.

A new explainable AI technique transparently classifies images without compromising accuracy. The method, developed at the University of Michigan, opens up AI for situations where understanding why a decision was made is just as important as the decision itself, like medical diagnostics.

If an AI model flags a tumor as malignant without specifying what prompted the result—like size, shape or a shadow in the image—doctors cannot verify the result or explain it to the patient. Worse, the model may have picked up on misleading patterns in the data that humans would recognize as irrelevant.

"We need AI systems we can trust, especially in high-stakes areas like health care. If we don't understand how a model makes decisions, we can't safely rely on it. I want to help build AI that's not only accurate, but also transparent and easy to interpret," said Salar Fattahi, an assistant professor of industrial and operations engineering at U-M and senior author of the study to be presented the afternoon of July 17 at the International Conference on Machine Learning in Vancouver, British Columbia.

When classifying an image, AI models associate vectors of numbers with specific concepts. These number sets, called concept embeddings, can help AI locate things like "fracture," "arthritis" or "healthy bone" in an X-ray. Explainable AI works to make concept embeddings interpretable—meaning a person can understand what the numbers represent and how they influence the model's decisions.
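
To make this concrete, here is a minimal sketch of concept-based scoring: the concept whose embedding is most similar to the image's embedding ranks highest. The vectors and concept names below are made-up placeholders for illustration, not values from the study.

```python
# Minimal sketch of concept-embedding scoring (illustrative values only).
import numpy as np

# Each concept is represented by an embedding: a vector of numbers.
# These 4-dimensional vectors are hypothetical; real embeddings are larger.
concepts = {
    "fracture":     np.array([0.12, -0.80, 0.35, 0.41]),
    "arthritis":    np.array([0.55,  0.10, -0.62, 0.22]),
    "healthy bone": np.array([-0.30, 0.44, 0.51, -0.15]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_concepts(image_embedding):
    """Rank concepts by similarity to an image's embedding."""
    scores = {name: cosine(image_embedding, c) for name, c in concepts.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A made-up image embedding, as produced by some vision encoder.
x = np.array([0.05, -0.70, 0.40, 0.38])
print(rank_concepts(x))  # "fracture" ranks highest for this vector
```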

Previous explainable AI methods add interpretability features after the model is already built. While these post-hoc approaches can identify key factors that influenced a model's predictions, the explanations they produce are, counterintuitively, not transparent themselves. These methods also treat concept embeddings as fixed numerical vectors, ignoring errors or misrepresentations inherent in the embeddings.

For instance, these models embed the concept of "healthy bone" using a pretrained multimodal model such as CLIP. Unlike carefully curated datasets, CLIP is trained on large-scale, noisy image-text pairs scraped from the internet. These pairs often include mislabeled data, vague descriptions or biologically incorrect associations, leading to inconsistencies in the resulting embeddings.
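
As an illustration, such a concept embedding is typically obtained by passing the concept's name through CLIP's text encoder. The sketch below assumes the Hugging Face transformers implementation of CLIP; it is not code from the paper.

```python
# Sketch: getting a text embedding for the concept "healthy bone" from a
# pretrained CLIP model, via the Hugging Face `transformers` API (assumed).
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

with torch.no_grad():
    inputs = tokenizer(["healthy bone"], padding=True, return_tensors="pt")
    embedding = model.get_text_features(**inputs)  # shape: (1, 512)

# Because CLIP is trained on noisy web-scraped image-text pairs, this
# embedding may carry mislabeled or biologically incorrect associations;
# earlier methods nonetheless treat it as a fixed, trusted vector.
print(embedding.shape)
```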

Published on the arXiv preprint server, the new framework—Constrained Concept Refinement or CCR—addresses the first problem by embedding and optimizing interpretability directly into the model's architecture. It solves the second by introducing flexibility in concept embeddings, allowing them to adapt to the specific task at hand.

The red arrows represent the backpropagation training process for classic explainable AI models. This paper extends the training process to refine concept embeddings with constraints on their deviation from initial embeddings, represented by green arrows and box. Credit: arXiv (2025). DOI: 10.48550/arxiv.2502.06775

Users can tune the framework to favor interpretability, by constraining the concept embeddings more tightly, or to favor accuracy, by allowing them to stray a bit further. That flexibility lets a potentially inaccurate concept embedding of "healthy bone," as obtained from CLIP, be automatically adjusted and corrected as it adapts to the available data. By leveraging this added freedom, the CCR approach can improve both the interpretability and the accuracy of the model.
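
A minimal sketch of that constrained-refinement idea, under our own assumptions: the concept embeddings are trainable, but after each update they are projected back into a ball around their initial values, and the ball's radius eps serves as the dial between interpretability and accuracy. The training loop and loss below are placeholders, not the authors' implementation.

```python
# Minimal sketch of constrained refinement (our own illustration, not the
# authors' code). Concept embeddings are trainable, but after each gradient
# step they are projected back into a ball of radius `eps` around their
# initial (e.g. CLIP-derived) embeddings. A small `eps` keeps concepts close
# to their interpretable meanings; a larger `eps` favors accuracy.
import torch

def project_to_ball(c, c0, eps):
    """Project each row of c onto the ball {v : ||v - c0_row|| <= eps}."""
    delta = c - c0
    norm = delta.norm(dim=-1, keepdim=True)
    scale = torch.clamp(eps / (norm + 1e-12), max=1.0)
    return c0 + delta * scale

torch.manual_seed(0)
c0 = torch.randn(3, 512)              # frozen initial concept embeddings
c = c0.clone().requires_grad_(True)   # refinable copies
target = torch.randn(3, 512)          # stand-in for a real task objective
opt = torch.optim.SGD([c], lr=0.1)
eps = 0.5                             # the interpretability/accuracy dial

for _ in range(100):
    loss = ((c - target) ** 2).mean()  # placeholder task loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        c.copy_(project_to_ball(c, c0, eps))

# Every refined embedding now lies within distance eps of its initialization.
print((c - c0).detach().norm(dim=-1))  # all entries <= eps
```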

"What surprised me most was realizing that interpretability doesn't have to come at the cost of accuracy. In fact, with the right approach, it's possible to achieve both—clear, explainable decisions and strong performance—in a simple and effective way," said Fattahi.

CCR outperformed two explainable methods (CLIP-IP-OMP and label-free CBM) in prediction accuracy while preserving interpretability when tested on three image classification benchmarks (CIFAR-10/100, ImageNet, Places365). Importantly, the new method reduced runtime tenfold, offering better performance at lower computational cost.

"Although our current experiments focus on image classification, the method's low implementation cost and ease of tuning suggest strong potential for broader applicability across diverse machine learning domains," said Geyu Liang, a doctoral graduate of industrial and operations engineering at U-M and lead author of the study.

For instance, AI is increasingly used to decide who qualifies for loans, but without explainability, rejected applicants are left in the dark. Explainable AI can increase transparency and fairness in finance by ensuring a decision was based on specific factors like income or credit history rather than on biased or unrelated information.

"We've only scratched the surface. What excites me most is that our work offers strong evidence that explainability can be brought into modern AI in a surprisingly efficient and low-cost way," said Fattahi.

More information: Geyu Liang et al, Enhancing Performance of Explainable AI Models with Constrained Concept Refinement, arXiv (2025). DOI: 10.48550/arxiv.2502.06775

Journal information: arXiv

Provided by University of Michigan College of Engineering

Citation: Explainable AI: New framework increases transparency in decision-making systems (2025, June 13), retrieved 13 June 2025 from https://techxplore.com/news/2025-06-ai-framework-transparency-decision.html
