July 30, 2025
Hiding secret codes in light can protect against fake videos

Fact-checkers may have a new tool in the fight against misinformation.
A team of Cornell researchers has developed a way to "watermark" light in videos, which they can use to detect whether a video is fake or has been manipulated.
The idea is to hide information in nearly invisible fluctuations of lighting at important events and locations, such as interviews and press conferences, or even entire buildings, like the United Nations Headquarters. These fluctuations are designed to go unnoticed by humans but are recorded as a hidden watermark in any video captured under the special lighting, which could be programmed into computer screens, photography lamps and built-in lighting. Each watermarked light source has a secret code that can be used to check for the corresponding watermark in the video and reveal any malicious editing.
Peter Michael, a graduate student in the field of computer science who led the work, will present the study, "Noise-Coded Illumination for Forensic and Photometric Video Analysis," on Aug. 10 at SIGGRAPH 2025 in Vancouver, British Columbia.
Editing video footage in a misleading way is nothing new. But with generative AI and social media, it is faster and easier to spread misinformation than ever before.
"Video used to be treated as a source of truth, but that's no longer an assumption we can make," said Abe Davis, assistant professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science, who first conceived of the idea. "Now you can pretty much create video of whatever you want. That can be fun, but also problematic, because it's only getting harder to tell what's real."
To address these concerns, researchers had previously designed techniques to watermark digital video files directly, with tiny changes to specific pixels that can be used to identify unmanipulated footage or tell if a video was created by AI. However, these approaches depend on the video creator using a specific camera or AI model—a level of compliance that may be unrealistic to expect from potential bad actors.
By embedding the code in the lighting, the new method ensures that any real video of the subject contains the secret watermark, regardless of who captured it. The team showed that programmable light sources, like computer screens and certain types of room lighting, can be coded with a small piece of software, while older lights, like many off-the-shelf lamps, can be coded by attaching a small computer chip about the size of a postage stamp. The program on the chip varies the brightness of the light according to the secret code.
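The core trick, driving a lamp with a tiny, secret, noise-like brightness signal, can be sketched in a few lines of Python. Everything below (the seed, the roughly 2% amplitude, the 240-updates-per-second rate) is an illustrative assumption, not a value from the study:

import numpy as np

def make_code(seed, n_samples):
    # Secret pseudorandom code: zero-mean, noise-like values in [-1, 1].
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, n_samples)

def modulate_brightness(base_level, code, amplitude=0.02):
    # Vary the lamp around its base brightness by a tiny (~2%) amount,
    # small enough to read as ordinary lighting noise to a human viewer.
    return np.clip(base_level * (1.0 + amplitude * code), 0.0, 1.0)

# Ten seconds of code at an assumed 240 updates per second.
code = make_code(seed=42, n_samples=10 * 240)
levels = modulate_brightness(base_level=0.8, code=code)
# 'levels' would be streamed to a dimmable lamp, screen, or LED driver.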

So, what secret information is hidden in these watermarks, and how does it reveal when video is fake? "Each watermark carries a low-fidelity time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos," Davis said. "When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos, which lets us see where changes were made. And if someone tries to generate fake video with AI, the resulting code videos just look like random variations."
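As a rough illustration of what recovering a code video could look like: an analyst who knows the secret code can correlate each pixel's brightness over time against that code, so regions genuinely lit by the coded source respond strongly while fabricated regions do not. This is a simplified sketch, assuming a synchronized code, a fixed camera and a linear camera response; it is not the authors' actual algorithm:

import numpy as np

def recover_code_map(frames, code):
    # frames: (T, H, W) grayscale video; code: length-T zero-mean signal.
    # Remove each pixel's average so only the flicker component remains.
    residual = frames - frames.mean(axis=0, keepdims=True)
    c = (code - code.mean()) / (np.linalg.norm(code) + 1e-8)
    # Per-pixel correlation with the code over time -> an (H, W) map.
    return np.tensordot(c, residual, axes=([0], [0]))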
Part of the challenge in this work was getting the code to be largely imperceptible to humans. "We used studies from the human perception literature to inform our design of the coded light," Michael said. "The code is also designed to look like random variations that already occur in light, called 'noise,' which also makes it difficult to detect unless you know the secret code."
If an adversary cuts out footage, such as from an interview or political speech, a forensic analyst with the secret code can see the gaps. And if the adversary adds or replaces objects, the altered parts generally appear black in recovered code videos.
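Building on the hypothetical correlation map above, a forensic check might simply flag regions where the code signal has vanished, the "black" areas described here. The threshold below is an arbitrary placeholder, not a tuned value:

import numpy as np

def flag_suspect_regions(code_map, threshold=0.1):
    # Normalize correlation strength, then mark near-zero pixels, where
    # inserted or replaced content has erased the lighting watermark.
    strength = np.abs(code_map) / (np.abs(code_map).max() + 1e-8)
    return strength < threshold  # boolean mask of suspicious regions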
The team has successfully used up to three separate codes for different lights in the same scene. With each additional code, the patterns become more complicated and harder to fake.
"Even if an adversary knows the technique is being used and somehow figures out the codes, their job is still a lot harder," Davis said. "Instead of faking the light for just one video, they have to fake each code video separately, and all those fakes have to agree with each other."
They have also verified that this approach works in some outdoor settings and on people with different skin tones.
Davis and Michael caution, however, that the fight against misinformation is an arms race, and adversaries will continue to devise new ways to deceive.
"This is an important ongoing problem," Davis said. "It's not going to go away, and in fact, it's only going to get harder."
More information: Peter Michael et al., Noise-Coded Illumination for Forensic and Photometric Video Analysis, ACM Transactions on Graphics (2025). DOI: 10.1145/3742892
Journal information: ACM Transactions on Graphics
Provided by Cornell University