Literary character approach helps LLMs simulate more human-like personalities

October 29, 2025

Ingrid Fadelli, contributing writer
Gaby Clark, scientific editor
Robert Egan, associate editor

Editors' notes

This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, preprint, trusted source, proofread.

Figure outlining the team's evaluation methodology aimed at catching the convergence of simulated personalities toward human personalities. Credit: Bai et al.

After the advent of ChatGPT, the use of large language models (LLMs) has become increasingly widespread worldwide. LLMs are artificial intelligence (AI) systems trained on large sets of written texts, which can rapidly process queries in various languages and generate responses that sometimes appear to be written by humans.

As these systems become increasingly advanced, they could be used to create virtual characters that simulate human personalities and behaviors. In addition, several researchers are now conducting psychology and behavioral science research involving LLMs, for instance, testing their performance on specific tasks and comparing it to that of humans.

Researchers at Hebei Petroleum University of Technology and Beijing Institute of Technology recently carried out a study aimed at assessing the ability of LLMs to simulate human personality traits and behaviors. Their paper, published on the arXiv preprint server, introduces a new framework to assess the consistency and realism of constructed identities (i.e., personas) or characters expressed by LLMs, while also reporting several important findings—including the discovery of a scaling law governing persona realism.

"Using LLMs to drive social simulations is clearly a major research frontier," Tianyu Huang, co-author of the paper, told Tech Xplore. "Compared with controlled experiments in natural sciences, social experiments are costly—sometimes even historically costly for humankind. Even for much smaller-scale domains like business or public policy, the potential applications are vast.

"From the perspective of LLM research itself, these models already exhibit impressive mathematical and logical abilities. Some studies even suggest that they internalize temporal and spatial concepts. Whether LLMs can further infer human attributes and thus engage with the humanities represents another major question."

Figure outlining marginal density in the convergence of simulated personalities toward human personalities. Credit: Bai et al.

A key challenge in the emulation of human-like traits and abilities using LLMs is the systematic bias often exhibited by existing models. Most earlier works tried to tackle this problem case by case, for instance by adjusting identifiable biases in training datasets or individual outputs produced by models. In contrast, Huang and his colleagues set out to develop a general framework that would address the root causes of LLM biases.

"First, we point out a methodological misconception in the current literature, namely that many researchers directly apply psychometric validity testing methods developed for humans to assess LLMs' personality simulation," explained Yuqi Bai, co-author of the paper. "We argue this is a categorical mismatch. Our approach steps back to a broader view—focusing not on isolated validity metrics but on the overall patterns."

As part of their study, the researchers tried to determine whether the statistical characteristics of the personalities simulated by LLMs converged with the patterns observed in humans. Rather than trying to pinpoint the characteristics that LLM and human personalities currently have in common, the team hoped to outline a path, or a set of variables, that would lead to the gradual convergence of AI and human personalities.
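A population-level comparison of this kind can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's actual method or data: it draws two stand-in samples of Big Five trait scores (one "human", one "simulated") and measures, per trait, how far apart the two distributions are using a simple 1-D Wasserstein distance. All numbers here are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

# Stand-in samples (rows = individuals, columns = Big Five traits).
# Real work would use human norm data and LLM-generated questionnaire scores.
human = rng.normal(loc=3.2, scale=0.7, size=(1000, 5))
simulated = rng.normal(loc=3.6, scale=0.4, size=(1000, 5))

def wasserstein_1d(a, b):
    """1-D Wasserstein distance between two equal-size samples:
    mean absolute difference of the sorted values."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

# Population-level comparison: one distance per trait, not per individual.
distances = {t: wasserstein_1d(human[:, i], simulated[:, i])
             for i, t in enumerate(TRAITS)}
for trait, d in distances.items():
    print(f"{trait}: {d:.3f}")
```

Tracking such distances as persona profiles get richer would show whether the simulated distributions are converging toward the human ones.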

"Our study went through a period of deep confusion," said Bai. "Using LLM-generated persona profiles initially led to strong systematic biases, and prompt engineering showed limited effect—just as others had found. Progress stalled. Then, during a team discussion, we realized that when LLMs generate persona profiles, they often behave as if writing a résumé—highlighting positive traits and suppressing negatives."

Eventually, Huang, Bai and their colleagues decided to assess the personalities that LLMs would convey in novels. As fictional literary works are often effective in capturing the complexity of human emotions and behavior, they asked LLMs to write their own novels.

Figure outlining the age curve in the convergence of simulated personalities toward human personalities. Credit: Bai et al.

"This became our third population-level experiment, and the results were remarkable, as the systematic bias was drastically reduced," said Bai. "Later experiments using Wikipedia literary characters showed simulated personality distributions converging much closer to human data. The conclusion was clear: detail and realism can overcome systematic bias."

The findings gathered by these researchers suggest that LLMs can partially emulate human personality traits. Moreover, these models' ability to simulate realistic personas was found to improve when they were provided with richer and more detailed descriptions of the 'virtual character' they were expected to be.
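The "detail level" variable can be made concrete with a small sketch. The persona fields and wording below are invented for illustration (they are not taken from the paper): the same character is described at increasing levels of richness, from a bare one-line profile to one that includes habits, contradictions, and a formative event.

```python
# Hypothetical sketch: one character, described at increasing detail levels.
def build_persona_prompt(detail_level: int) -> str:
    base = "You are Wei, a 34-year-old librarian."
    extras = [
        " You grew up in a small coastal town and moved to the city at 18.",
        (" You are meticulous at work but disorganized at home, and you"
         " avoid conflict even when it costs you."),
        (" Last year you quit a stable job after a falling-out with a"
         " colleague, a decision you still second-guess."),
    ]
    return base + "".join(extras[:detail_level])

for level in range(4):
    prompt = build_persona_prompt(level)
    print(f"level {level}: {len(prompt.split())} words")
```

Under the study's finding, prompts toward the richer end of such a scale should yield simulated personality distributions closer to human data than the bare profile.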

"Our main contribution is identifying persona detail level as the key variable determining the effectiveness of LLM-driven social simulations," explained Kun Sun, co-author of the paper.

"From an application perspective, social platforms and LLM API providers already possess massive, detail-rich user profile data—forming a powerful foundation for social simulation. This presents both tremendous commercial potential and serious ethical and privacy concerns. Preventing manipulative control and safeguarding human autonomy are therefore critical challenges."

In the future, this recent study could inform the development of conversational AI agents or virtual characters that realistically simulate specific personas. In addition, it could inspire research exploring the risks of AI-simulated personas and introduce methods to limit or detect the unethical use of LLM-based virtual characters.

Meanwhile, the team plans to further investigate the scaling law guiding the LLM simulation of human personalities. For instance, they would like to train models on richer persona datasets or employ more sophisticated data management tools.

"We also plan to explore whether similar scaling phenomena appear in other human-like traits such as values," added Sun and Yuting Chen. "We will use linear regression-based probing techniques to examine whether LLMs have internalized prior distributions about human attributes within their latent representations. Understanding this implicit world model may reveal the underlying mechanism behind human trait simulation."
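The linear probing idea the authors mention can be sketched as follows. This is a toy illustration under stated assumptions, not their implementation: the "hidden states" here are synthetic random vectors, whereas real probing would extract them from an LLM's intermediate layers; the trait score is constructed to be mostly linear in them, so an ordinary-least-squares probe recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in hidden representations (n examples, d-dimensional states).
n, d = 200, 64
hidden = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
# Synthetic trait score: largely a linear function of the hidden state.
trait = hidden @ true_w + 0.1 * rng.normal(size=n)

# Linear probe: ordinary least squares from hidden states to trait scores.
w, *_ = np.linalg.lstsq(hidden, trait, rcond=None)
pred = hidden @ w
r2 = 1 - np.sum((trait - pred) ** 2) / np.sum((trait - np.mean(trait)) ** 2)
print(f"probe R^2: {r2:.3f}")
```

A high R² on held-out data would suggest the trait is linearly decodable from the representations; a low one would suggest it is not encoded, or not encoded linearly.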


More information: Yuqi Bai et al, Scaling Law in LLM Simulated Personality: More Detailed and Realistic Persona Profile Is All You Need, arXiv (2025). DOI: 10.48550/arxiv.2510.11734

Journal information: arXiv

© 2025 Science X Network

Citation: Literary character approach helps LLMs simulate more human-like personalities (2025, October 29), retrieved 29 October 2025 from https://techxplore.com/news/2025-10-literary-character-approach-llms-simulate.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
