October 1, 2025
Reader survey shows AI-driven misinformation lowers trust but raises engagement with trustworthy news sources
Edited by Gaby Clark, scientific editor, and Andrew Zinin, lead editor

Concerns over the prevalence of online misinformation, including "fake news," and its implications for politics, business, and society at large have gained momentum in the last decade. The rise of social media, with its almost complete lack of barriers to disseminating content, has contributed to markedly diminished trust in the news, as have new developments in artificial intelligence (AI), especially generative AI (GenAI). All of these changes have implications for political polarization as well as the economic viability of the news industry.
In a new study, researchers examined the interplay among AI-powered misinformation, trust, and the media ecosystem, using a field experiment conducted by a major German newspaper, Süddeutsche Zeitung (SZ). They found that while AI-driven misinformation may lower trust, it also boosts engagement with trustworthy news sources.
The study, conducted by researchers at Carnegie Mellon University, Johns Hopkins University, National University of Singapore, and Süddeutsche Zeitung Digitale Medien, is published as a working paper.
"The media industry has struggled financially since the rise of the Internet in the 2000s," notes Ananya Sen, associate professor of information technology and management at Carnegie Mellon's Heinz College, who coauthored the study. "For business models that rely on producing high-quality news content, if it becomes impossible to distinguish real from false content, producing the news could become economically unsustainable."
SZ is a major German newspaper with a daily paid circulation of more than 260,000 and 295,000 online subscribers. By reputation and quality, it is similar to The New York Times in the United States and The Guardian in the United Kingdom.
SZ regularly conducts surveys of online subscribers, digital app users, and website visitors. In this study, conducted in early 2025, researchers examined 17,000 people who considered SZ a very trustworthy news source. Readers were randomly assigned to two groups: the first group was shown three pairs of real and AI-generated photos related to current affairs and asked to judge whether either image in each pair had been generated by AI. The second group was shown pairs of real images on the same set of issues and asked questions unrelated to AI.
Next, readers in both groups were asked to take a quiz to evaluate the severity of misinformation and to rate their level of trust in SZ and other media outlets and platforms. Over the following weeks, researchers tracked more than 6,000 respondents' online behavior as it related to SZ, with the users' permission.
For respondents in the first group, exposure to information highlighting the difficulty of distinguishing real from AI-generated images changed post-survey browsing behavior: daily visits to SZ digital content rose 3% in the first three to five days, an effect that declined over time but remained significant after two weeks, and these respondents also retained more of the information they read. The effects were stronger for respondents who found the quiz difficult and for those with less prior interest in politics.
The study also found that respondents in the first group were less likely to drop their subscriptions than were respondents in the second group, despite having learned more about AI-generated images. Moreover, subscribers' retention rates rose 1.1% after five months, corresponding to about a one-third decline in the rate of attrition.
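The two retention figures can be checked against each other with back-of-the-envelope arithmetic. The sketch below assumes the 1.1% rise in retention is measured in percentage points and equals the reported one-third decline in attrition (both are assumptions about definitions not spelled out in the article):

```python
# Back-of-the-envelope check of the reported retention figures.
# Assumption: the 1.1% retention gain is in percentage points and
# corresponds to the stated ~one-third decline in attrition.
retention_gain_pp = 1.1            # reported rise in five-month retention
attrition_decline_fraction = 1 / 3 # reported: ~one-third drop in attrition

# If the gain is one-third of baseline attrition, the implied
# baseline five-month attrition rate is:
implied_baseline_attrition_pp = retention_gain_pp / attrition_decline_fraction
print(f"Implied baseline five-month attrition: ~{implied_baseline_attrition_pp:.1f}%")
```

Under these assumptions, the two numbers are mutually consistent with a baseline attrition rate of roughly 3.3% over five months.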
These findings suggest a possible business strategy for the news industry in response to the challenge posed by AI-generated content, the authors say. From a broader societal perspective, they offer a nuanced counterpoint to concerns that AI (and misinformation more broadly) will trigger a downward spiral of trust in the information environment: as trustworthy content becomes scarcer, the potential rewards for being trustworthy grow.
But it is not enough for purveyors of the news to retain a given level of trustworthiness. Media outlets must ensure that their ability to help readers distinguish real from AI content evolves at least as fast as the difficulty of the task.
"The deterioration in the information environment that has come from the emergence of a technology like GenAI leads to reduced trust in the information environment as a whole," explains Filipe Campante of Johns Hopkins University, who led the study.
"However, a news outlet that is perceived as sufficiently trustworthy may nevertheless witness increased demand as a result because its relative value goes up in the eyes of readers who deem it trustworthy enough to mitigate the effects of the misinformation technology."
More information: Filipe Campante et al, GenAI Misinformation, Trust, and News Consumption: Evidence from a Field Experiment (2025). DOI: 10.3386/w34100
Provided by Carnegie Mellon University's Heinz College
Citation: Reader survey shows AI-driven misinformation found to lower trust, but raise engagement with trustworthy news sources (2025, October 1), retrieved 1 October 2025 from https://techxplore.com/news/2025-10-reader-survey-ai-driven-misinformation.html