New method to block AI learning from your online content

August 12, 2025


Lisa Lock, scientific editor
Andrew Zinin, lead editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, trusted source, proofread.

Credit: CSIRO

A new technique developed by Australian researchers could stop unauthorized artificial intelligence (AI) systems from learning from photos, artwork and other image-based content.

Developed by CSIRO, Australia's national science agency, in partnership with the Cyber Security Cooperative Research Center (CSCRC) and the University of Chicago, the method subtly alters content to make it unreadable to AI models while appearing unchanged to the human eye.

The breakthrough could help artists, organizations and social media users protect their work and personal data from being used to train AI systems or create deepfakes. For example, a social media user could automatically apply a protective layer to their photos before posting, preventing AI systems from learning facial features for deepfake creation. Similarly, defense organizations could shield sensitive satellite imagery or cyber threat data from being absorbed into AI models.

The technique sets a limit on what an AI system can learn from protected content. It provides a mathematical guarantee that this protection holds, even against adaptive attacks or retraining attempts.
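The published method is considerably more sophisticated than this, but the basic building block of such protections, an imperceptible, bounded perturbation added to each image, can be sketched as follows. This is an illustrative toy only: the noise source, the `epsilon` budget, and the clipping scheme are assumptions for demonstration, not CSIRO's actual algorithm, which optimizes the perturbation to carry a provable learnability bound.

```python
import numpy as np

def protect_image(img: np.ndarray, epsilon: float = 8.0, seed: int = 0) -> np.ndarray:
    """Add an imperceptible, bounded perturbation to an 8-bit image.

    Illustrative sketch only: real unlearnable-example methods craft the
    perturbation against a surrogate model; here we only demonstrate the
    L-infinity budget that keeps the change invisible to human viewers.
    """
    rng = np.random.default_rng(seed)
    # Per-pixel perturbation drawn within [-epsilon, +epsilon] (L-inf budget).
    delta = rng.uniform(-epsilon, epsilon, size=img.shape)
    # Keep pixel values in the valid 0-255 range after perturbing.
    protected = np.clip(img.astype(np.float64) + delta, 0, 255)
    return protected.round().astype(np.uint8)

# Example: a dummy 64x64 RGB image (uniform gray).
original = np.full((64, 64, 3), 128, dtype=np.uint8)
protected = protect_image(original)

# The per-pixel change never exceeds the budget, so the image looks unchanged.
max_change = np.abs(protected.astype(int) - original.astype(int)).max()
```

In a real deployment of this idea, a platform would apply such a transform server-side to every upload, so users would never see a difference while downstream scrapers receive only the protected pixels.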

Dr. Derui Wang, CSIRO scientist, said the technique offers a new level of certainty for anyone uploading content online.

"Existing methods rely on trial and error or assumptions about how AI models behave," Dr. Wang said. "Our approach is different; we can mathematically guarantee that unauthorized machine learning models can't learn from the content beyond a certain threshold. That's a powerful safeguard for social media users, content creators, and organizations."

Dr. Wang said the technique could be applied automatically at scale.

"A social media platform or website could embed this protective layer into every image uploaded," he said. "This could curb the rise of deepfakes, reduce intellectual property theft, and help users retain control over their content."

While the method is currently applicable to images, there are plans to expand it to text, music, and videos.

The method is still theoretical, with results validated in a controlled lab setting. The code is available on GitHub for academic use, and the team is seeking research partners from sectors including AI safety and ethics, defense, cybersecurity, academia, and more.

The paper, "Provably Unlearnable Data Examples," was presented at the Network and Distributed System Security Symposium (NDSS 2025), where it received the Distinguished Paper Award.

More information: Provably Unlearnable Data Examples. www.ndss-symposium.org/ndss-pa … nable-data-examples/

Provided by CSIRO

Citation: New method to block AI learning from your online content (2025, August 12), retrieved 12 August 2025 from https://techxplore.com/news/2025-08-method-block-ai-online-content.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
