October 15, 2025
AI systems and humans 'see' the world differently—and that's why AI images look so garish

How do computers see the world? It's not quite the same way humans do.
Recent advances in generative artificial intelligence (AI) make it possible to do more things with computer image processing. You might ask an AI tool to describe an image, for example, or to create an image from a description you provide.
As generative AI tools and services become more embedded in day-to-day life, knowing more about how computer vision compares to human vision is becoming essential.
My latest research, published in Visual Communication, uses AI-generated descriptions and images to get a sense of how AI models "see". It reveals a bright, sensational world of generic images quite different from the human visual realm.
Comparing human and computer vision
Humans see when light enters our eyes through the cornea, pupil and lens. A light-sensitive surface inside the eyeball called the retina converts the light into electrical signals, which our brains then interpret as the images we see.
Our vision focuses on key aspects such as color, shape, movement and depth. Our eyes let us detect changes in the environment and identify potential threats and hazards.

Computers work very differently. They process images by standardizing them, inferring the context of an image through metadata (such as time and location information in an image file), and comparing images to other images they have previously learned about. Computers focus on things such as edges, corners or textures present in the image. They also look for patterns and try to classify objects.
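To make "looking for edges" concrete, below is a minimal sketch in Python of one classic technique, the Sobel filter, which scores each pixel by how sharply brightness changes around it. This is an illustration of the general approach, not code from the study, and "example.jpg" is a placeholder file name.

import numpy as np
from PIL import Image

def sobel_edges(path):
    # Load the image and flatten it to grayscale intensities.
    img = np.asarray(Image.open(path).convert("L"), dtype=float)

    # Sobel kernels approximate horizontal and vertical brightness gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            window = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * window
            gy += ky[i, j] * window

    # Edge strength is the size of the gradient at each pixel.
    return np.hypot(gx, gy)

edges = sobel_edges("example.jpg")
print("strongest edge response:", edges.max())

High values mark pixels where the image changes abruptly: exactly the kind of structure, rather than meaning, that computer vision latches onto first.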
You've likely helped computers learn how to "see" by completing online CAPTCHA tests.
These are typically used to tell humans and bots apart. But they're also used to train and improve machine learning algorithms.
So, when you're asked to "select all the images with a bus", you're helping software learn to distinguish between types of vehicles as well as proving you're human.
Exploring how computers 'see' differently
In my new research, I asked a large language model to describe two visually distinct sets of human-created images.
One set contained hand-drawn illustrations while the other was made up of camera-produced photographs.

I fed the descriptions back into an AI tool and asked it to visualize what it had described. I then compared the original human-made images to the computer-generated ones.
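In outline, the procedure is a two-step loop, sketched below in Python. The functions describe_image and generate_image are hypothetical placeholders standing in for whichever image-description and image-generation services are available; they are not the study's actual tools or prompts.

def describe_image(path):
    # Placeholder: send the image to a vision-language model
    # and return its text description.
    raise NotImplementedError("call your image-description service here")

def generate_image(prompt, out_path):
    # Placeholder: send the description to a text-to-image model
    # and save the generated image.
    raise NotImplementedError("call your image-generation service here")

# Example file names only; the study used hand-drawn illustrations
# and camera-produced photographs.
source_images = ["illustration_01.png", "photograph_01.png"]

for src in source_images:
    description = describe_image(src)         # step 1: image to text
    generate_image(description, "ai_" + src)  # step 2: text back to image
    # step 3 (done by hand): compare src with its AI version
    # for style, color, depth and content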
The resulting descriptions noted that the hand-drawn images were illustrations, but didn't mention that the other images were photographs or describe their high level of realism. This suggests AI tools treat photorealism as the default visual style unless specifically prompted otherwise.
Cultural context was largely absent from the descriptions. The AI tool either couldn't or wouldn't infer cultural context from the presence of, for example, Arabic or Hebrew writing in the images. This underscores the dominance of some languages, such as English, in AI tools' training data.
While color is vital to human vision, it was largely ignored in the AI-generated descriptions, as were visual depth and perspective.
The AI images were more boxy than the hand-drawn illustrations, which used more organic shapes.
The AI images were also much more saturated than the source images: they contained brighter, more vivid colors. This likely reflects the prevalence of stock photos, which tend to be more "contrasty", in AI tools' training data.
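One simple way to quantify a saturation gap like this (a sketch under my own assumptions, not the measurement reported in the paper) is to convert each image to HSV color space, where saturation is its own channel, and compare averages. The file names are placeholders.

import numpy as np
from PIL import Image

def mean_saturation(path):
    # Pillow's HSV mode stores saturation in channel 1, scaled 0-255.
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)
    return hsv[..., 1].mean() / 255.0  # average saturation, 0 to 1

human_s = mean_saturation("original.jpg")    # placeholder file name
ai_s = mean_saturation("ai_version.jpg")     # placeholder file name
print(f"human-made: {human_s:.2f}  AI-generated: {ai_s:.2f}")

A noticeably higher score for the AI image would match the brighter, more vivid look described above.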
The AI images were also more sensationalist. A single car in the original image became one of a long column of cars in the AI version. AI seems to exaggerate details not just in text but also in visual form.
The generic nature of the AI images means they can be used in many contexts and across countries. But the lack of specificity also means audiences might perceive them as less authentic and engaging.

Deciding when to use human or computer vision
This research supports the notion that humans and computers "see" differently. Knowing when to rely on computer or human vision to describe or create images can be a competitive advantage.
While AI-generated images can be eye-catching, they can also come across as hollow upon closer inspection. This can limit their value.
Images are adept at sparking an emotional reaction, and audiences might find human-created images that authentically reflect specific conditions more engaging than computer-generated attempts.
However, the capabilities of AI can make it an attractive option for quickly labeling large data sets and helping humans categorize them.
Ultimately, there's a role for both human and AI vision. Knowing more about the opportunities and limits of each can help keep you safer, more productive, and better equipped to communicate in the digital age.
Journal information: Visual Communication
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.