July 30, 2025
AI can evolve to feel guilt—but only in certain social environments
Krystal Kasal, contributing writer
Lisa Lock, scientific editor
Robert Egan, associate editor

Guilt is a highly advantageous quality for society as a whole. It might not prevent an initial wrongdoing, but it allows people to judge their own past actions as harmful and helps keep them from being repeated. The internal distress that guilt causes often, but not always, leads a person to take on some form of penance to relieve that turmoil. This might be as simple as admitting the wrongdoing to others and accepting the slight stigma of being seen as morally flawed. The upfront cost may be painful at first, but it can relieve further guilt and lead to better cooperation within the group in the future.
As we interact more and more with artificial intelligence and use it in almost every aspect of our modern society, finding ways to instill ethical decision-making becomes more critical. In a recent study, published in the Journal of the Royal Society Interface, researchers used game theory to explore how and when guilt evolves in multi-agent systems.
The researchers used the "prisoner's dilemma"—a game in which two players must each choose between cooperating and defecting. Defecting gives an agent a higher payoff, but at the cost of betraying their partner, which in turn makes it more likely that the partner will also defect. If the game is repeated over and over, however, sustained cooperation yields a better payoff for both agents.
Initially, though, this shift from defecting to cooperating requires a "feeling" of guilt and penance in the form of reduced points. This iterated version of the game was studied in well-mixed populations and in lattice (homogeneous) and scale-free (heterogeneous) network structures.
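The basic payoff logic of the iterated game can be sketched in a few lines of Python. The payoff values below are the conventional illustrative ones (temptation > reward > punishment > sucker's payoff), not the parameters used in the study:

```python
# Minimal iterated prisoner's dilemma sketch.
# Payoff values are illustrative (T > R > P > S), not the study's parameters.
T, R, P, S = 5, 3, 1, 0  # temptation, mutual reward, mutual punishment, sucker's payoff

def payoff(my_move, their_move):
    """Return my payoff for one round. 'C' = cooperate, 'D' = defect."""
    table = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}
    return table[(my_move, their_move)]

def play_rounds(strategy_a, strategy_b, rounds=10):
    """Iterate the game; each strategy maps the opponent's last move to a new move."""
    score_a = score_b = 0
    last_a = last_b = 'C'  # assume an initial cooperative signal
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        score_a += payoff(move_a, move_b)
        score_b += payoff(move_b, move_a)
        last_a, last_b = move_a, move_b
    return score_a, score_b

always_defect = lambda _: 'D'
always_cooperate = lambda _: 'C'

# Over repeated rounds, mutual cooperation outscores mutual defection:
print(play_rounds(always_cooperate, always_cooperate))  # (30, 30)
print(play_rounds(always_defect, always_defect))        # (10, 10)
```

In a single round a defector exploits a cooperator (5 vs. 0 points here), which is why cooperation only pays off once the game is repeated and both sides sustain it.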
The researchers also distinguished between two different types of guilt: social guilt, which requires awareness of others' states, and non-social guilt, which is self-focused and does not require social awareness.
Ultimately, the results showed that both social and non-social guilt evolved and persisted in the more structured populations, where strategies involving guilt dominated and led to higher levels of cooperation. Non-social guilt was more vulnerable to exploitation by individuals who felt no guilt, but it survived by clustering with similarly emotionally inclined strategies. In well-mixed populations, by contrast, cooperation was much rarer: social guilt evolved only when its social cost was sufficiently small, and non-social guilt did not evolve at all.
A common thread was that agents found it advantageous to atone for their actions (by losing points and cooperating instead of defecting) only when their partner also exhibited guilt, which was more likely to occur in structured social networks. Assessing a partner's behavior was thus an important part of the process.
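The distinction above can be sketched as a simple decision rule. This is a simplified illustration under assumed values, not the study's actual model: a non-social-guilt agent atones whenever it has defected, while a social-guilt agent atones only when its partner also shows guilt.

```python
# Illustrative sketch of social vs. non-social guilt.
# The cost value and decision rule are simplifying assumptions, not the study's model.
GUILT_COST = 2  # points paid to atone; afterwards the agent switches to cooperating

def resolve_guilt(agent_is_social, partner_shows_guilt, defected):
    """
    Decide whether an agent atones after a round, returning the cost it pays.
    Non-social guilt: atone whenever the agent itself defected.
    Social guilt: atone only if the partner also exhibits guilt.
    """
    if not defected:
        return 0
    if agent_is_social and not partner_shows_guilt:
        return 0  # social guilt is not triggered against guilt-free partners
    return GUILT_COST

# A social-guilt agent avoids paying against a guilt-free defector...
print(resolve_guilt(agent_is_social=True, partner_shows_guilt=False, defected=True))   # 0
# ...but atones (and will shift toward cooperation) when guilt is mutual.
print(resolve_guilt(agent_is_social=True, partner_shows_guilt=True, defected=True))    # 2
# A non-social-guilt agent pays the cost regardless, which guilt-free agents can exploit.
print(resolve_guilt(agent_is_social=False, partner_shows_guilt=False, defected=True))  # 2
```

The conditional rule is what lets social guilt survive in structured networks: it pays the atonement cost only where reciprocity makes the cost worthwhile.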
The study authors write, "Cooperation does not emerge when agents only alleviate their own guilt (i.e., non-social guilt), without considering their co-players' own attitudes about the alleviation of guilt as well. That is the case where guilt-prone agents are easily dominated by agents who do not express guilt or who have no motivation to alleviate their own guilt. Hence, only when the tendency to alleviate guilt is mutual (i.e., social guilt) can cooperation thrive."
While real-life social networks may be more complex than those in the simulations, this study does provide some valuable insight into how guilt and ethical cooperation might be integrated into AI systems.
More information: Theodor Cimpeanu et al, The evolutionary advantage of guilt: co-evolution of social and non-social guilt in structured populations, Journal of the Royal Society Interface (2025). DOI: 10.1098/rsif.2025.0164. royalsocietypublishing.org/doi … .1098/rsif.2025.0164
Journal information: Journal of the Royal Society Interface
© 2025 Science X Network
Citation: AI can evolve to feel guilt—but only in certain social environments (2025, July 30), retrieved 30 July 2025 from https://techxplore.com/news/2025-07-ai-evolve-guilt-social-environments.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.