October 29, 2025
Literary character approach helps LLMs simulate more human-like personalities
by Ingrid Fadelli, contributing writer
edited by Gaby Clark, scientific editor, and Robert Egan, associate editor

After the advent of ChatGPT, the use of large language models (LLMs) has become increasingly widespread worldwide. LLMs are artificial intelligence (AI) systems trained on large sets of written texts, which can rapidly process queries in various languages and generate responses that sometimes appear to be written by humans.
As these systems become increasingly advanced, they could be used to create virtual characters that simulate human personalities and behaviors. In addition, several researchers are now conducting psychology and behavioral science studies involving LLMs, for instance, testing the models' performance on specific tasks and comparing it to that of humans.
Researchers at Hebei Petroleum University of Technology and Beijing Institute of Technology recently carried out a study aimed at assessing the ability of LLMs to simulate human personality traits and behaviors. Their paper, published on the arXiv preprint server, introduces a new framework to assess the consistency and realism of constructed identities (i.e., personas) or characters expressed by LLMs, while also reporting several important findings—including the discovery of a scaling law governing persona realism.
"Using LLMs to drive social simulations is clearly a major research frontier," Tianyu Huang, co-author of the paper, told Tech Xplore. "Compared with controlled experiments in natural sciences, social experiments are costly—sometimes even historically costly for humankind. Even for much smaller-scale domains like business or public policy, the potential applications are vast.
"From the perspective of LLM research itself, these models already exhibit impressive mathematical and logical abilities. Some studies even suggest that they internalize temporal and spatial concepts. Whether LLMs can further infer human attributes and thus engage with the humanities represents another major question."

A key challenge in the emulation of human-like traits and abilities using LLMs is the systematic bias often exhibited by existing models. Most earlier works tried to tackle this problem case by case, for instance by adjusting identifiable biases in training datasets or individual outputs produced by models. In contrast, Huang and his colleagues set out to develop a general framework that would address the root causes of LLM biases.
"First, we point out a methodological misconception in the current literature, namely that many researchers directly apply psychometric validity testing methods developed for humans to assess LLMs' personality simulation," explained Yuqi Bai, co-author of the paper. "We argue this is a categorical mismatch. Our approach steps back to a broader view—focusing not on isolated validity metrics but on the overall patterns."
As part of their study, the researchers examined whether the statistical characteristics of the personalities simulated by LLMs converged with the patterns observed in humans. Rather than trying to pinpoint the characteristics that LLM and human personalities currently have in common, the team hoped to outline a path, or set of variables, that would lead to the gradual convergence of AI and human personalities.
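In concrete terms, such a population-level check might compare the distribution of questionnaire scores produced by many LLM personas against human normative data. The sketch below is illustrative only: the numbers are synthetic placeholders, and the Kolmogorov-Smirnov test and Wasserstein distance are one plausible choice of metrics, not necessarily the ones used in the paper.

```python
# Minimal sketch of a population-level comparison between simulated and
# human personality scores. Placeholder data: in practice `llm_scores`
# would hold questionnaire results from many LLM personas and
# `human_scores` the corresponding human normative data.
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance

rng = np.random.default_rng(0)
# One Big Five trait (e.g., extraversion) on a 1-5 scale; the simulated
# distribution is deliberately narrower and shifted, mimicking the kind
# of systematic bias the authors describe.
human_scores = rng.normal(loc=3.2, scale=0.7, size=1000).clip(1, 5)
llm_scores = rng.normal(loc=3.9, scale=0.3, size=1000).clip(1, 5)

# Two-sample Kolmogorov-Smirnov test: do the two distributions match?
stat, p_value = ks_2samp(llm_scores, human_scores)
# Wasserstein (earth mover's) distance: how far apart are they?
dist = wasserstein_distance(llm_scores, human_scores)

print(f"KS statistic: {stat:.3f} (p={p_value:.3g}), Wasserstein: {dist:.3f}")
```

A low KS statistic and small Wasserstein distance across traits would indicate the kind of convergence the team was looking for.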
"Our study went through a period of deep confusion," said Bai. "Using LLM-generated persona profiles initially led to strong systematic biases, and prompt engineering showed limited effect—just as others had found. Progress stalled. Then, during a team discussion, we realized that when LLMs generate persona profiles, they often behave as if writing a résumé—highlighting positive traits and suppressing negatives."
Eventually, Huang, Bai and their colleagues decided to assess the personalities that LLMs would convey in novels. Since fictional literary works often capture the complexity of human emotions and behavior, they asked the LLMs to write their own novels.

"This became our third population-level experiment, and the results were remarkable, as the systematic bias was drastically reduced," said Bai. "Later experiments using Wikipedia literary characters showed simulated personality distributions converging much closer to human data. The conclusion was clear: detail and realism can overcome systematic bias."
The findings gathered by these researchers suggest that LLMs can partially emulate human personality traits. Moreover, the models' ability to simulate realistic personas improved when they were given richer and more detailed descriptions of the virtual character they were asked to embody.
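To make the idea of "detail level" concrete, a minimal experiment might administer the same Likert-style personality item to an LLM under a sparse persona prompt and a rich one. Everything in the sketch below is invented for illustration: `query_llm` is a hypothetical stand-in for a real chat-completion API, and the personas and item wording are not taken from the paper.

```python
# Illustrative sketch of the detail-level manipulation described above.
def query_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical stand-in: replace with a real chat-completion call."""
    return "4"  # canned reply so the sketch runs end to end

SPARSE_PERSONA = "You are Anna, a 34-year-old teacher."

DETAILED_PERSONA = (
    "You are Anna Kovacs, a 34-year-old literature teacher in Budapest. "
    "You grew up caring for a chronically ill parent, which left you "
    "conscientious but anxious about conflict; you unwind by writing "
    "short stories you never show anyone, and you quietly resent how "
    "much of your identity is organized around being 'the reliable one'."
)

LIKERT_ITEM = (
    "On a scale from 1 (strongly disagree) to 5 (strongly agree), how "
    "much do you agree with: 'I see myself as someone who is outgoing, "
    "sociable.' Answer with a single number."
)

# The paper's finding suggests answers elicited under the detailed
# profile should, in aggregate, sit closer to human distributions.
for persona in (SPARSE_PERSONA, DETAILED_PERSONA):
    print(query_llm(system_prompt=persona, user_prompt=LIKERT_ITEM))
```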
"Our main contribution is identifying persona detail level as the key variable determining the effectiveness of LLM-driven social simulations," explained Kun Sun, co-author of the paper.
"From an application perspective, social platforms and LLM API providers already possess massive, detail-rich user profile data—forming a powerful foundation for social simulation. This presents both tremendous commercial potential and serious ethical and privacy concerns. Preventing manipulative control and safeguarding human autonomy are therefore critical challenges."
In the future, this recent study could inform the development of conversational AI agents or virtual characters that realistically simulate specific personas. In addition, it could inspire research exploring the risks of AI-simulated personas and introduce methods to limit or detect the unethical use of LLM-based virtual characters.
Meanwhile, the team plans to further investigate the scaling law guiding the LLM simulation of human personalities. For instance, they would like to train models on richer persona datasets or employ more sophisticated data management tools.
"We also plan to explore whether similar scaling phenomena appear in other human-like traits such as values," added Sun and Yuting Chen. "Use linear regression-based probing techniques to examine whether LLMs have internalized prior distributions about human attributes within their latent representations. Understanding this implicit world model may reveal the underlying mechanism behind human traits simulation."
More information: Yuqi Bai et al, Scaling Law in LLM Simulated Personality: More Detailed and Realistic Persona Profile Is All You Need, arXiv (2025). DOI: 10.48550/arxiv.2510.11734
Journal information: arXiv
© 2025 Science X Network