October 16, 2025
The way we talk to chatbots affects their accuracy, new research reveals
Paul Arnold
contributing writer
Lisa Lock
scientific editor
Robert Egan
associate editor

Whether we're seeking customer support, looking for recommendations, or simply asking a quick question, AI chatbots are designed to give us the answers we're looking for. But there's more going on beneath the surface. The conversations we have with chatbots also become data that can be used to refine how they understand and respond. And the type of language we use, whether formal or informal, directly affects the quality of their answers, according to new research.
In general, people naturally adapt their conversational style to the person they are speaking with. In a bank, for example, when talking to a loan officer, we might use more formal language and complete sentences; in a relaxed setting like a bar with friends, we might use abbreviations and more casual phrasing. Fulei Zhang and Zhou Yu at Amazon wanted to see whether the same shift happens when people talk to chatbots rather than to other humans, and if so, how it affects a chatbot's performance.
The researchers compared thousands of messages people sent to human agents with those sent to AI chatbots, focusing on features such as grammar, vocabulary, and politeness. According to scores assigned by the Claude 3.5 Sonnet model, people were 14.5% more polite and formal and 5.3% more grammatically fluent when chatting with humans than when talking to AI.
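The article doesn't reproduce the judging setup, but scoring messages for these features with an LLM judge can be sketched. Below is a minimal sketch using the Anthropic Python SDK; the rubric wording, the 1-to-5 scale, and the JSON output format are assumptions, since only the choice of Claude 3.5 Sonnet as the scorer comes from the study.

```python
# Hedged sketch: an LLM-as-judge scoring pass over chat messages.
# The rubric prompt and 1-to-5 scale are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

RUBRIC = (
    "Rate the following message on politeness, formality, and grammatical "
    "fluency, each on a 1-to-5 scale. Reply with JSON keys "
    "'politeness', 'formality', and 'fluency'.\n\nMessage: {message}"
)

def score_message(message: str) -> str:
    """Ask Claude 3.5 Sonnet for linguistic-feature scores on one message."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=100,
        messages=[{"role": "user", "content": RUBRIC.format(message=message)}],
    )
    return response.content[0].text  # JSON string with the three scores
```

Averaging such scores over the human-directed and chatbot-directed message sets would yield comparisons like the 14.5% politeness gap reported above.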
Next, they trained an AI model called Mistral 7B on about 13,000 real chats between people, then tested how well it understood more than 1,300 messages people had sent to chatbots. To broaden the AI's exposure, they also created blunt and polite rewrites of those messages to simulate different communication styles.
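The article doesn't say how the blunt and polite rewrites were generated, but style augmentation of this kind is typically done by prompting an LLM to paraphrase each message while preserving its meaning. A minimal sketch, in which the instruction wording, the augment helper, and the choice of rewriting model are all assumptions:

```python
# Hedged sketch: generating stylistic variants of a message for data
# augmentation. The prompts and model choice are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

STYLE_INSTRUCTIONS = {
    "blunt": "Rewrite the message below in a terse, blunt style. Preserve its meaning.",
    "polite": "Rewrite the message below in a polite, formal style. Preserve its meaning.",
}

def augment(message: str) -> dict[str, str]:
    """Return the original message plus one stylistic rewrite per style."""
    variants = {"original": message}
    for style, instruction in STYLE_INSTRUCTIONS.items():
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=300,
            messages=[{"role": "user", "content": f"{instruction}\n\n{message}"}],
        )
        variants[style] = response.content[0].text
    return variants
```

Each training message then contributes three variants to the fine-tuning set: the original plus the two rewrites.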
It turns out that models trained on a diverse mix of message styles, the real messages plus their rewrites, were 2.9% better at understanding user intent than a model trained solely on the original human conversations. The researchers also tried to improve the model's understanding by rewriting informal messages at inference time, just before the model saw them, to be more formal, but this led to a drop in understanding of almost 2%.
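The difference between the two strategies is easy to state in code. The harness below is a sketch: predict_intent stands in for the fine-tuned Mistral 7B classifier and normalize for an LLM rewriter that formalizes a message before prediction; both names are hypothetical.

```python
from typing import Callable, Optional, Sequence

def intent_accuracy(
    predict_intent: Callable[[str], str],  # hypothetical fine-tuned classifier
    messages: Sequence[str],
    labels: Sequence[str],
    normalize: Optional[Callable[[str], str]] = None,  # optional formalizing rewriter
) -> float:
    """Fraction of test messages whose predicted intent matches the gold label."""
    correct = 0
    for message, label in zip(messages, labels):
        if normalize is not None:
            # Inference-time normalization: rewrite the message (e.g., into a
            # formal style) just before prediction. The paper found this
            # reduced accuracy by almost 2%.
            message = normalize(message)
        if predict_intent(message) == label:
            correct += 1
    return correct / len(labels)
```

In the paper's terms, the winning configuration is a model fine-tuned on the style-augmented data and evaluated with normalize=None; the losing one is a baseline model evaluated with a formalizing normalize step.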
The best way to make chatbots smarter, the researchers conclude in their paper published on the arXiv preprint server, is to train them on a range of communication styles: "Training-time exposure to diverse linguistic variation is more effective than inference-time normalization. Models must learn to interpret diverse communication styles during training, rather than rely on brittle post-hoc transformations that risk semantic distortion."
Findings like these could be key to improving the quality of chatbot responses in real-world settings.
More information: Fulei Zhang et al, Mind the Gap: Linguistic Divergence and Adaptation Strategies in Human-LLM Assistant vs. Human-Human Interactions, arXiv (2025). DOI: 10.48550/arXiv.2510.02645
Journal information: arXiv
© 2025 Science X Network