October 23, 2025
Strength of gender biases in AI images varies across languages

Researchers at the Technical University of Munich (TUM) and TU Darmstadt have studied how text-to-image generators deal with gender stereotypes in various languages. The results show that the models not only reflect gender biases, but also amplify them. The direction and strength of the distortion depend on the language in question.
On social media, in web searches and on posters, AI-generated images can now be found everywhere. AI systems such as ChatGPT can convert simple text input into deceptively realistic images. Researchers have now demonstrated that generating such artificial images not only reproduces gender biases, but actually magnifies them.
Models in different languages investigated
Whereas previous studies had generally focused only on English-language models, this study examined models across nine languages and compared the results. As a benchmark, the team developed the Multilingual Assessment of Gender Bias in Image Generation (MAGBIG), which is built on carefully controlled occupational designations.
The study investigated four different types of prompts: direct prompts that use the 'generic masculine' in languages in which the generic term for an occupation is grammatically masculine ('doctor'), indirect descriptions ('a person working as a doctor'), explicitly feminine prompts ('female doctor') and 'gender star' prompts (the German convention intended to create a gender-neutral designation by using an asterisk, e.g. 'Ärzt*innen' for doctors).
To make the results comparable, the researchers included languages in which the names of occupations are gendered, such as German, Spanish and French. The benchmark also incorporated languages such as English and Japanese that use only one grammatical gender but have gendered pronouns ('her', 'his'). And finally, it included languages without grammatical gender: Korean and Chinese.
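To make the prompt design concrete, here is a minimal sketch of how such prompts could be assembled for a single occupation and passed to an off-the-shelf text-to-image pipeline. This is not the authors' MAGBIG code: the German wordings, the diffusers checkpoint and the file naming are illustrative assumptions, and the study evaluated multilingual models rather than the English-trained stand-in used here.

```python
# Illustrative sketch only (not the MAGBIG benchmark code).
# Builds the four prompt types described above for one occupation (German wording
# assumed for illustration) and generates images with a diffusers pipeline.
import torch
from diffusers import AutoPipelineForText2Image

occupation = "doctor"

prompts = {
    "direct_generic_masculine": "Ein Foto eines Arztes",               # 'a photo of a doctor' (generic masculine)
    "indirect_description":     "Eine Person, die als Arzt arbeitet",  # 'a person working as a doctor'
    "explicit_feminine":        "Ein Foto einer Ärztin",               # 'a photo of a female doctor'
    "gender_star":              "Ein Foto von Ärzt*innen",             # gender-star form
}

# Stand-in checkpoint; the study itself compared multilingual text-to-image models.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

for prompt_type, prompt in prompts.items():
    # Generate several images per prompt so a gender distribution can be estimated later.
    images = pipe(prompt, num_images_per_prompt=4).images
    for i, image in enumerate(images):
        image.save(f"{occupation}_{prompt_type}_{i}.png")
```

Repeating this for many occupations and for prompts in all nine languages yields the image sets whose gender distributions the study then compared.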
AI images perpetuate and magnify role stereotypes
The results of the study, published in the Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), show that direct prompts using the generic masculine produce the strongest biases.
For example, such occupations as "accountant" produce mostly images of white males, while prompts referring to caregiving professions tend to generate female-presenting images. Gender-neutral or "gender-star" forms only slightly mitigated these stereotypes, while images resulting from explicitly feminine prompts showed almost exclusively women.
Along with the gender distribution, the researchers also analyzed how well the models understood and executed the various prompts. While neutral formulations reduced gender stereotypes, they also led to poorer alignment between the text input and the generated image.
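As a rough illustration of the two quantities just described, the sketch below computes the share of female-presenting images and a simple skew score per prompt type. The counts are made-up placeholder numbers rather than results from the study, and the skew measure is one plausible choice, not necessarily the paper's exact metric; prompt adherence is typically quantified separately, for example with CLIP-based image-text similarity.

```python
# Illustrative sketch only: estimating gender skew per prompt type.
# The counts below are made-up placeholders; in practice they would come from a
# classifier (or human annotation) applied to the generated images.
counts = {
    "direct_generic_masculine": {"female": 3,  "male": 97},
    "indirect_description":     {"female": 18, "male": 82},
    "explicit_feminine":        {"female": 99, "male": 1},
    "gender_star":              {"female": 22, "male": 78},
}

for prompt_type, c in counts.items():
    total = c["female"] + c["male"]
    female_share = c["female"] / total
    # Skew = deviation from an even 50/50 split: 0.0 is balanced, 0.5 is maximally imbalanced.
    skew = abs(female_share - 0.5)
    print(f"{prompt_type:26s} female share = {female_share:.2f}  skew = {skew:.2f}")
```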
"Our results clearly show that the language structures have a considerable influence on the balance and bias of AI image generators," says Alexander Fraser, professor for data analytics and statistics at TUM Campus in Heilbronn. "Anyone using AI systems should be aware that different wordings may result in entirely different images and may therefore magnify or mitigate societal role stereotypes."
"AI image generators are not neutral—they illustrate our prejudices in high resolution, and this depends crucially on language. Especially in Europe, where many languages converge, this is a wake-up call: fair AI must be designed with language sensitivity in mind," adds Prof. Kristian Kersting, co-director of hessian.AI and co-spokesperson for the "Reasonable AI" cluster of excellence at TU Darmstadt.
Remarkably, bias varies across languages without a clear link to grammatical structures. For example, switching from French to Spanish prompts leads to a substantial increase in gender bias, despite both languages distinguishing in the same way between male and female occupational terms.
More information: Felix Friedrich et al, Multilingual Text-to-Image Generation Magnifies Gender Stereotypes, Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2025). DOI: 10.18653/v1/2025.acl-long.966
Provided by the Technical University of Munich