Literary character approach helps LLMs simulate more human-like personalities

October 29, 2025

Ingrid Fadelli, contributing writer; Gaby Clark, scientific editor; Robert Egan, associate editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, preprint, trusted source, proofread.

Figure outlining the team's evaluation methodology aimed at catching the convergence of simulated personalities toward human personalities. Credit: Bai et al.

After the advent of ChatGPT, the use of large language models (LLMs) has become increasingly widespread worldwide. LLMs are artificial intelligence (AI) systems trained on large sets of written texts, which can rapidly process queries in various languages and generate responses that sometimes appear to be written by humans.

As these systems become increasingly advanced, they could be used to create virtual characters that simulate human personalities and behaviors. In addition, several researchers are now conducting psychology and behavioral science research involving LLMs, for instance by testing their performance on specific tasks and comparing it to that of humans.

Researchers at Hebei Petroleum University of Technology and Beijing Institute of Technology recently carried out a study aimed at assessing the ability of LLMs to simulate human personality traits and behaviors. Their paper, published on the arXiv preprint server, introduces a new framework to assess the consistency and realism of constructed identities (i.e., personas) or characters expressed by LLMs, while also reporting several important findings—including the discovery of a scaling law governing persona realism.

"Using LLMs to drive social simulations is clearly a major research frontier," Tianyu Huang, co-author of the paper, told Tech Xplore. "Compared with controlled experiments in natural sciences, social experiments are costly—sometimes even historically costly for humankind. Even for much smaller-scale domains like business or public policy, the potential applications are vast.

"From the perspective of LLM research itself, these models already exhibit impressive mathematical and logical abilities. Some studies even suggest that they internalize temporal and spatial concepts. Whether LLMs can further infer human attributes and thus engage with the humanities represents another major question."

Figure outlining marginal density in the convergence of simulated personalities toward human personalities. Credit: Bai et al.

A key challenge in the emulation of human-like traits and abilities using LLMs is the systematic bias often exhibited by existing models. Most earlier works tried to tackle this problem case by case, for instance by adjusting identifiable biases in training datasets or individual outputs produced by models. In contrast, Huang and his colleagues set out to develop a general framework that would address the root causes of LLM biases.

"First, we point out a methodological misconception in the current literature, namely that many researchers directly apply psychometric validity testing methods developed for humans to assess LLMs' personality simulation," explained Yuqi Bai, co-author of the paper. "We argue this is a categorical mismatch. Our approach steps back to a broader view—focusing not on isolated validity metrics but on the overall patterns."

As part of their study, the researchers tried to determine whether the statistical characteristics of the personalities simulated by LLMs converged with the patterns observed in humans. Rather than trying to pinpoint the characteristics that LLM and human personalities currently have in common, the team hoped to outline a path, or a set of variables, that would lead to the gradual convergence of AI and human personalities.
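This kind of population-level comparison can be sketched as a distance between two trait-score distributions. The sketch below uses a 1-D earth mover's distance between simulated and human samples of a single Big Five trait; the Gaussian trait models, their parameters, and the "résumé-style vs. detailed persona" contrast are illustrative assumptions, not the authors' actual data or metric:

```python
import random

def simulate_trait_scores(n, mean, sd):
    """Stand-in for trait scores (1-5 scale) extracted from questionnaire answers."""
    rng = random.Random(0)  # fixed seed so runs are reproducible
    return [min(5.0, max(1.0, rng.gauss(mean, sd))) for _ in range(n)]

def wasserstein_1d(a, b):
    """1-D earth mover's distance between two equal-sized samples."""
    a, b = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Hypothetical human reference: Extraversion ~ N(3.2, 0.9), clipped to [1, 5].
human = simulate_trait_scores(1000, 3.2, 0.9)
# "Résumé-style" personas cluster high and narrow (the systematic bias the
# team describes); richer literary personas spread out like the human data.
resume_personas = simulate_trait_scores(1000, 4.3, 0.3)
literary_personas = simulate_trait_scores(1000, 3.3, 0.8)

print("resume bias:  ", round(wasserstein_1d(resume_personas, human), 3))
print("literary bias:", round(wasserstein_1d(literary_personas, human), 3))
```

Under these toy numbers, the detailed-persona distribution sits much closer to the human reference, mirroring the convergence pattern the study looks for.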

"Our study went through a period of deep confusion," said Bai. "Using LLM-generated persona profiles initially led to strong systematic biases, and prompt engineering showed limited effect—just as others had found. Progress stalled. Then, during a team discussion, we realized that when LLMs generate persona profiles, they often behave as if writing a résumé—highlighting positive traits and suppressing negatives."

Eventually, Huang, Bai and their colleagues decided to assess the personalities that LLMs would convey in novels. As fictional literary works are often effective in capturing the complexity of human emotions and behavior, they asked LLMs to write their own novels.
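The persona-conditioning step described here can be sketched as a prompt-assembly helper: wrap a detailed character profile around a standard personality questionnaire item and collect a Likert-scale answer. Everything below (the profile text, the item wording, the message format) is an illustrative assumption in the style of chat-completion APIs, not the authors' exact prompts:

```python
# Hypothetical persona profile; a richer, literary-character description is
# exactly the kind of detail the study found reduces systematic bias.
PROFILE = (
    "You are Elizabeth Bennet from 'Pride and Prejudice': quick-witted, "
    "independent, prone to strong first impressions, loyal to your family."
)

LIKERT_ITEM = "I see myself as someone who is outgoing, sociable."
SCALE = "Answer with a single number from 1 (disagree strongly) to 5 (agree strongly)."

def build_prompt(profile: str, item: str) -> list[dict]:
    """Assemble a chat-style message list for one questionnaire item."""
    return [
        {"role": "system", "content": profile + " Stay in character."},
        {"role": "user", "content": f"{item}\n{SCALE}"},
    ]

messages = build_prompt(PROFILE, LIKERT_ITEM)
print(messages[1]["content"])
```

Repeating this over a full inventory of items, and over many characters, yields the population of simulated trait scores whose distribution can then be compared with human norms.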

Figure outlining the age curve in the convergence of simulated personalities toward human personalities. Credit: Bai et al.

"This became our third population-level experiment, and the results were remarkable, as the systematic bias was drastically reduced," said Bai. "Later experiments using Wikipedia literary characters showed simulated personality distributions converging much closer to human data. The conclusion was clear: detail and realism can overcome systematic bias."

The findings gathered by these researchers suggest that LLMs can partially emulate human personality traits. Moreover, the models' ability to simulate realistic personas improved when they were given richer and more detailed descriptions of the 'virtual character' they were expected to embody.

"Our main contribution is identifying persona detail level as the key variable determining the effectiveness of LLM-driven social simulations," explained Kun Sun, co-author of the paper.

"From an application perspective, social platforms and LLM API providers already possess massive, detail-rich user profile data—forming a powerful foundation for social simulation. This presents both tremendous commercial potential and serious ethical and privacy concerns. Preventing manipulative control and safeguarding human autonomy are therefore critical challenges."

In the future, this recent study could inform the development of conversational AI agents or virtual characters that realistically simulate specific personas. In addition, it could inspire research exploring the risks of AI-simulated personas and introduce methods to limit or detect the unethical use of LLM-based virtual characters.

Meanwhile, the team plans to further investigate the scaling law guiding the LLM simulation of human personalities. For instance, they would like to train models on richer persona datasets or employ more sophisticated data management tools.

"We also plan to explore whether similar scaling phenomena appear in other human-like traits such as values," added Sun and Yuting Chen. "We will use linear regression-based probing techniques to examine whether LLMs have internalized prior distributions about human attributes within their latent representations. Understanding this implicit world model may reveal the underlying mechanism behind human trait simulation."
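The probing idea mentioned here, fitting a linear readout on frozen representations, can be sketched on synthetic data. The "hidden states" below are random vectors with a planted trait direction, standing in for real model activations; the dimensions, noise level, and training loop are all illustrative assumptions:

```python
import random

random.seed(0)
DIM, N = 8, 200

# Planted "trait direction" inside the representation space (assumption).
w_true = [random.gauss(0, 1) for _ in range(DIM)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Synthetic hidden states and noisy trait scores they encode linearly.
X = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N)]
y = [dot(x, w_true) + random.gauss(0, 0.1) for x in X]

# Minimal linear probe trained with stochastic gradient descent.
w = [0.0] * DIM
lr = 0.05
for _ in range(50):
    for xi, yi in zip(X, y):
        err = dot(xi, w) - yi
        w = [wj - lr * err * xj for wj, xj in zip(w, xi)]

# If the trait is linearly decodable, the probe aligns with the planted direction.
cos = dot(w, w_true) / (dot(w, w) ** 0.5 * dot(w_true, w_true) ** 0.5)
print("cosine similarity:", round(cos, 3))
```

A probe that recovers the planted direction from activations alone is the kind of evidence the team describes: it would suggest the model carries an internal, linearly readable prior about the attribute.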


More information: Yuqi Bai et al, Scaling Law in LLM Simulated Personality: More Detailed and Realistic Persona Profile Is All You Need, arXiv (2025). DOI: 10.48550/arxiv.2510.11734

Journal information: arXiv

© 2025 Science X Network

Citation: Literary character approach helps LLMs simulate more human-like personalities (2025, October 29) retrieved 29 October 2025 from https://techxplore.com/news/2025-10-literary-character-approach-llms-simulate.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
