Explainable AI can improve deepfake detection transparency

February 6, 2025


Explainable artificial intelligence (XAI) approach. Credit: Applied Sciences (2025). DOI: 10.3390/app15020725

A new study by SRH University emphasizes the benefits of explainable AI systems for the reliable and transparent detection of deepfakes. AI decisions can be presented in a comprehensible way through feature analyses and visualizations, thus promoting trust in AI technologies.

A research team led by Prof Dr. Alexander I. Iliev from SRH University, with key contributions by the researcher Nazneen Mansoor, has developed an innovative method for detecting deepfakes. In the study recently published in the journal Applied Sciences, the scientists present the use of explainable artificial intelligence (Explainable AI) to increase transparency and reliability in the identification of manipulated media content.

Deepfakes, i.e., fake media content such as videos or audio files created using artificial intelligence, pose an increasing threat to society, as they can be used to spread misinformation and undermine public trust. Conventional detection methods often reach their limits, especially when it comes to making the decision-making processes of AI models comprehensible.

In its study, the SRH University team carried out extensive tests in which different AI models were examined for their ability to reliably identify deepfakes. Particular attention was paid to explainable AI, which makes it possible to present the basis for the models' decisions in a transparent and comprehensible manner.

That is finished, for instance, utilizing visualization strategies similar to "warmth maps," which spotlight in coloration which picture areas the AI has recognized as related for its choice. As well as, the explainable fashions analyze particular options similar to textures or motion patterns that point out manipulation.

Prof Dr. Iliev, Head of the Computer Science - Big Data & Artificial Intelligence Master's program, explains the significance of these approaches: "Our goal was to create technologies that are not only effective, but also trustworthy. The ability to make the decision-making process of AI transparent is becoming increasingly important, be it in law enforcement, the media industry or in science."

The study shows that explainable AI not only improves recognition accuracy, but also promotes understanding of and trust in AI technologies. By showing how decisions were made, weaknesses in the models can be identified and future systems can be optimized in a targeted manner. This is a crucial step in strengthening the responsible use of AI in society.

More information: Nazneen Mansoor et al, Explainable AI for DeepFake Detection, Applied Sciences (2025). DOI: 10.3390/app15020725

Provided by SRH University. Citation: Explainable AI can improve deepfake detection transparency (2025, February 6) retrieved 6 February 2025 from https://techxplore.com/news/2025-02-ai-deepfake-transparency.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
