August 5, 2025
Protection from AI crawlers eludes visual artists despite available tools, study shows
Edited by Gaby Clark, scientific editor, and Robert Egan, associate editor

Visual artists want to protect their work from non-consensual use by generative AI tools such as ChatGPT. But most of them do not have the technical know-how or control over the tools needed to do so.
One of the best ways to protect artists' creative work is to prevent it from ever being seen by "AI crawlers"—the programs that harvest data on the internet for training generative models. But most artists don't have access to the tools that would allow them to take such actions. And when they do have access, they don't know how to use them.
These are some of the conclusions of a study by a group of researchers at the University of California San Diego and University of Chicago, which will be presented at the 2025 Internet Measurement Conference in October in Madison, Wis.
The study is published on the arXiv preprint server.
"At the core of the conflict in this paper is the notion that content creators now wish to control how their content is used, not simply if it is accessible. While such rights are typically explicit in copyright law, they are not readily expressible, let alone enforceable in today's internet.
"Instead, a series of ad hoc controls have emerged based on repurposing existing web norms and firewall capabilities, none of which match the specificity, usability, or level of enforcement that is, in fact, desired by content creators," the researchers write.
The research team surveyed over 200 visual artists about the demand for tools to block AI crawlers, as well as the artists' technical expertise. Researchers also reviewed more than 1,100 professional artist websites to see how much control artists had over AI-blocking tools. Finally, the team evaluated which processes were the most effective at blocking AI crawlers.

Currently, artists can fairly easily use some tools that mask original artworks from AI crawlers by turning the art into something different. The study's co-authors at the University of Chicago developed one of these tools, known as Glaze.
But ideally, artists would be able to keep AI crawlers from harvesting their data altogether. To do so, visual artists need to defend themselves against three categories of AI crawlers. One type harvests data to train the large language models that power chatbots, another to increase the knowledge of AI-backed assistants, and yet another to support AI-backed search engines.
Artist survey
There has been extensive media coverage of how generative AI has severely disrupted the livelihoods of many artists. As a result, close to 80% of the 203 visual artists the researchers surveyed said they have tried to take proactive steps to keep their artwork from being included in training data for generative AI tools. Two-thirds reported using Glaze. In addition, 60% of artists have cut back on the amount of work they share online, and 51% share only low-resolution images of their work.
Also, 96% of artists said they would like to have access to a tool that can deter AI crawlers from harvesting their data. But more than 60% of them were not familiar with one of the simplest tools that can do this: robots.txt.
Tools for deterring AI crawlers
Robots.txt is a simple text file placed in the root directory of a website that spells out which pages crawlers are allowed to access. It can also name crawlers that are barred from the site entirely. But crawlers are under no obligation to honor these restrictions.
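As a concrete illustration (not drawn from the study), here is a minimal robots.txt of the kind described above, checked with Python's standard-library `urllib.robotparser`. The `GPTBot` user agent is OpenAI's publicly documented crawler token; the site and URLs are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt: bar one AI crawler from the whole site,
# while leaving it open to all other crawlers.
robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The disallowed crawler is refused everywhere on the site...
print(parser.can_fetch("GPTBot", "https://example.com/portfolio/piece.png"))        # False
# ...while any other crawler is still allowed.
print(parser.can_fetch("SomeOtherBot", "https://example.com/portfolio/piece.png"))  # True
```

Note that this check only shows what the file *asks* crawlers to do; as explained above, nothing forces a crawler to comply.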

Researchers surveyed the 100,000 most popular websites on the internet and found that more than 10% explicitly disallow AI crawlers in their robots.txt files. But some sites, including Vox Media and The Atlantic, removed this prohibition after entering into licensing agreements with AI companies.
Indeed, the number of sites allowing AI crawlers is increasing, including popular right-wing misinformation sites. Researchers hypothesize that these sites might seek to spread misinformation to LLMs.
One issue for artists is that they often have no access to, or control over, the relevant robots.txt file. In a survey of 1,100 artist websites, researchers found that more than three-quarters are hosted on third-party service platforms, most of which do not allow modifications to robots.txt.
Many of the content management systems artists use also give them little to no information about what type of crawling is blocked. Squarespace is the only such company that provides a simple interface for blocking AI tools, yet researchers found that only 17% of artists who use Squarespace enable this option, likely because many are unaware it exists.
But do crawlers respect the prohibitions listed in robots.txt, even though they are not mandatory?
The answer is mixed. Crawlers from big corporations generally do respect robots.txt, both in claim and in practice. The only crawler researchers could clearly determine does not is Bytespider, deployed by TikTok owner ByteDance. A large number of crawlers claim to respect robots.txt restrictions, but researchers were unable to verify whether they actually do.
All in all, "the majority of AI crawlers operated by big companies do respect robots.txt, while the majority of AI assistant crawlers do not," the researchers write.

More recently, network provider Cloudflare has launched a "block AI bots" feature. At this point, only 5.7% of the sites using Cloudflare have this option enabled. But researchers hope it will become more popular over time.
"While it is an 'encouraging new option', we hope that providers become more transparent with the operation and coverage of their tools (for example by providing the list of AI bots that are blocked)," said Elisa Luo, one of the paper's authors and a Ph.D. student in Savage's research group.
Legislative and legal uncertainties
The global landscape around AI crawlers is in constant flux, shaped by legal developments and a wide range of legislative proposals.
In the United States, AI companies face legal challenges over the extent to which copyright applies to models trained on data scraped from the internet, and over what their obligations might be to the creators of that content. In the European Union, the recently passed AI Act requires providers of AI models to get authorization from copyright holders to use their data.
"There is reason to believe that confusion around the availability of legal remedies will only further focus attention on technical access controls," the researchers write. "To the extent that any U.S. court finds an affirmative 'fair use' defense for AI model builders, this weakening of remedies on use will inevitably create an even stronger demand to enforce controls on access."
More information: Enze Liu et al, Somesite I Used To Crawl: Awareness, Agency and Efficacy in Protecting Content Creators From AI Crawlers, arXiv (2024). DOI: 10.48550/arxiv.2411.15091
Journal information: arXiv
Provided by University of California – San Diego
Citation: Protection from AI crawlers eludes visual artists despite available tools, study shows (2025, August 5), retrieved 5 August 2025 from https://techxplore.com/news/2025-08-ai-crawlers-eludes-visual-artists.html