ChatGPT only talks in cliches—here’s why that’s a threat to human creativity

September 2, 2025


Credit: Tim Douglas from Pexels

When you chat with ChatGPT, it often feels like you're talking to someone polite, engaged and responsive. It nods in all the right places, mirrors your wording and seems eager to keep the exchange flowing.

But is this really what human conversation sounds like? Our new study shows that while ChatGPT plausibly imitates dialog, it does so in a way that is stereotypical rather than unique.

Every conversation has quirks. When two family members talk on the phone, they don't just exchange information—they reuse each other's words, rework them creatively, interrupt, disagree, joke, banter or wander off-topic.

They do so partly because human talk is naturally fragmented, and partly to enact their own identities in interaction. These moments of "conversational uniqueness" are what make real dialog unpredictable and deeply human.

We wanted to contrast human conversations with AI ones. So we compared 240 phone conversations between Chinese family members with dialogs simulated by ChatGPT under the same contextual conditions, using a statistical model to measure patterns across hundreds of turns.

To capture human uniqueness in our study, we mainly focused on three levels of human interaction. One was "dialogic resonance." That's to do with re-using each other's expressions. For example, when speaker A says "You never call me," speaker B may respond "You are the one who never calls."

Another factor we included was "recombinant creativity." This involves inventing new twists on what's just been said by an interlocutor. For example, speaker A may ask "All good?", to which speaker B responds "All smashing." Here the structure is kept constant but the adjective is creatively substituted in a way that is unique to the exchange.

A final feature we included was "relevance acknowledgment": showing interest and recognition of the other's point, such as "It's interesting what you said, in fact …" or "That's a good point …".
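For readers who want a concrete feel for what measuring such features can involve, here is a minimal Python sketch that scores dialogic resonance as crude word overlap between consecutive turns. The function names and the overlap heuristic are illustrative inventions for this article, not the statistical model used in the study.

```python
# Toy illustration: score "dialogic resonance" as the share of a turn's
# content words that echo the previous turn. A deliberate simplification;
# the study's actual model is far more sophisticated.

def content_words(turn: str) -> set:
    """Lowercase a turn, strip punctuation, keep words longer than two letters."""
    cleaned = "".join(ch if ch.isalpha() or ch.isspace() else " " for ch in turn)
    return {w for w in cleaned.lower().split() if len(w) > 2}

def resonance(prev_turn: str, turn: str) -> float:
    """Fraction of the current turn's content words re-used from the previous turn."""
    prev, curr = content_words(prev_turn), content_words(turn)
    return len(prev & curr) / len(curr) if curr else 0.0

# The example exchange from above:
print(f"{resonance('You never call me', 'You are the one who never calls'):.2f}")
# prints 0.29: "you" and "never" are echoed; "calls" is missed because
# this toy version does no stemming.
```

A real measure would also need stemming, stop-word handling and a model of how much overlap is expected by chance; the sketch only conveys the basic idea.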

What we found

ChatGPT did remarkably well—even too well—at showing engagement. It often echoed and acknowledged the other speaker even more than humans do. But it fell short in two decisive ways.

First, lexical diversity was much lower for ChatGPT than for human speakers. Where people varied their words and expressions, the AI recycled the same ones.
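To give a rough, invented illustration of what lower lexical diversity means, a simple proxy is the type-token ratio: unique words divided by total words. The toy turns below are made up for this example and are not data from the study.

```python
# Toy illustration of lexical diversity via type-token ratio (TTR):
# unique words / total words. Higher values mean more varied wording.

def type_token_ratio(turns: list) -> float:
    tokens = " ".join(turns).lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Invented examples: repetitive, template-like turns vs. varied ones.
templated = ["take care of your health", "do not worry too much",
             "take care of your health"]
varied = ["why in the world are you juggling two jobs",
          "you never call me anyway"]

print(f"templated TTR: {type_token_ratio(templated):.2f}")
print(f"varied TTR:    {type_token_ratio(varied):.2f}")
```

On these toy inputs the repetitive turns score well below the varied ones, which is the direction of the gap we observed between ChatGPT and human speakers.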

Second, and most importantly, we spotted a lot of stereotypical speech in the AI-generated conversations. When it simulated giving advice or making requests, ChatGPT defaulted to predictable parental-style recommendations such as "Take care of your health" and "Don't worry too much."

This was unlike real human parents, who mixed in clarifications, refusals, jokes, sarcasm and even impolite expressions at times. In our data, a far more human way of showing concern for a daughter's health at college was often to imply rather than instruct: for example, a mother asking, "Why in the world are you juggling two jobs?", with the implied meaning that she will burn out if she keeps up this pace.

In short, ChatGPT statistically flattened human dialogs in the context of our inquiry, replacing them with a polished, plausible but ultimately rather dry template.

Why this matters

At first glance, ChatGPT's consistency feels like a strength. It makes the system reliable and predictable. Yet these very qualities also make it less human. Real people avoid sounding repetitive. They resist clichés. They build conversations that are recognizably theirs.

This is what defines unique identities in interaction—how we want to be perceived by others. There are words, expressions and intonations you would never use, not necessarily because they are impolite, but because they do not represent who you are or how you want to sound to others.

Being accused of being "boring" is definitely something most people try to avoid; it's effectively what brings about American playboy Dickie Greenleaf's death in the famous Patricia Highsmith novel, The Talented Mr. Ripley, when he says it of his friend, Tom Ripley. The conversational choices we make are not simply appropriate ways to talk, but strategies for locating ourselves in society and constructing our singular identity with every conversation.

This gap matters in all sorts of ways. If AI cannot capture the uniqueness of human interaction, it risks reinforcing stereotypes of how people ought to speak, rather than reflecting how they actually do. More troubling still, it may promote a new procedural ideology of conversation—one where talk is reduced to sounding engaged yet remains uncreative; a functional but impoverished tool of cooperation.

Our findings suggest that AI is remarkably good at modeling the normative patterns of dialog—the things people say often and conventionally. But it struggles with the idiosyncratic and unexpected, which are essential for creativity, humor and authentic human conversation.

The danger is not only that AI sounds merely plausible. It is that humans, over time, may begin to imitate its style, so that AI's stereotyped behavior starts to reshape conversational norms.

In the long run, we may find ourselves "learning" from AI how to converse—gradually erasing creativity and uniqueness from our own speech. Conversation, at its core, is not just about efficiency. It is about co-creating meaning and social identities through innovation and extravagance, even more than we realize.

What might be at stake, then, assuming AI can't overcome this problem, is not simply whether it can converse like humans—but whether humans will continue to converse like themselves.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: ChatGPT only talks in cliches—here's why that's a threat to human creativity (2025, September 2) retrieved 2 September 2025 from https://techxplore.com/news/2025-09-chatgpt-cliches-threat-human-creativity.html
