October 8, 2025 report
People-pleasing chatbots may boost your ego, but they can weaken your judgment
Paul Arnold
contributing writer
Gaby Clark
scientific editor
Robert Egan
associate editor

Most people enjoy receiving praise occasionally, but if it comes from sycophantic chatbots, it could be doing you more harm than good. Computer scientists from Stanford University and Carnegie Mellon University have found that people-pleasing chatbots can have a detrimental impact on our judgment and behavior.
AI chatbots have become a ubiquitous part of life, so much so that some people are turning to them for personal advice and emotional support. The researchers evaluated 11 current AI models, including some of the most popular and state-of-the-art systems, such as OpenAI's GPT-4o and Google's Gemini-1.5-Flash, and found that the chatbots flatter users more often than people do. The sycophantic AI models affirmed a user's actions 50% more often than people would in similar situations, even in cases where user queries mentioned deception or other types of morally questionable behavior.
To understand the prevalence of AI flattery and its impact on people, the researchers first needed to determine how frequently the models endorsed a user's actions. They did this by analyzing AI responses across different types of queries, such as general advice questions and real-life conflict scenarios. They then compared these with human responses to establish a baseline for normal, non-sycophantic agreement.
Next, they conducted two controlled studies with 1,604 participants who were randomly assigned to either a sycophantic AI or a non-sycophantic AI. The participants in the sycophantic group received overly agreeable advice and validating responses, while those in the non-sycophantic group received more balanced advice.
As the team discusses in a paper published on the arXiv preprint server, users exposed to the sycophantic AI became more convinced they were right and were less willing to take actions to resolve conflicts.
They trusted AI more when it agreed with them and even described these flattering AI systems as "objective" and "fair." This social sycophancy, where AI validates a user's self-image and actions, creates a potentially dangerous digital echo chamber where a person only encounters information and opinions that reflect and reinforce their own.
"These findings show that social sycophancy is prevalent across leading AI models, and even brief interactions with sycophantic AI models can shape users' behavior: reducing their willingness to repair interpersonal conflict while increasing their conviction of being in the right," wrote the researchers.
In light of their research, the study authors make several recommendations. These include calling on developers to modify the rules they use to build AI, penalizing flattery and rewarding objectivity. They also stress that AI systems need better transparency so users can easily recognize when they are being overly agreeable.
More information: Myra Cheng et al, Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence, arXiv (2025). DOI: 10.48550/arxiv.2510.01395
Journal information: arXiv
© 2025 Science X Network
Citation: People-pleasing chatbots may boost your ego, but they can weaken your judgment (2025, October 8) retrieved 8 October 2025 from https://techxplore.com/news/2025-10-people-chatbots-boost-ego-weaken.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.