May 14, 2025
Can generative AI replace humans in qualitative research studies?

Having humans participate in a study can be time-consuming and expensive for researchers stretching limited budgets on strict deadlines. Sophisticated generative large language models (LLMs) can complete many tasks, so some researchers and companies have explored the idea of using them in studies instead of human participants.
Researchers from Carnegie Mellon University's School of Computer Science identified fundamental limitations to using LLMs in qualitative research focused on a human's perspective, including the ways information is gathered and aggregated and issues surrounding consent and data collection.
"We looked into this question of whether LLM-based agents can replace human participation in qualitative research, and the high-level answer was no," said Hoda Heidari, the K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies in CMU's Software and Societal Systems Department (S3D) and Machine Learning Department.
"There are all sorts of nuances that human participants contribute that you can't possibly get out of LLM-based agents, no matter how good the technology is."
The team's paper, "Simulacrum of Stories: Examining Large Language Models as Qualitative Research Participants," received an honorable mention award at the Association for Computing Machinery's Conference on Human Factors in Computing Systems (CHI 2025) last week in Yokohama, Japan.
Team members from SCS included Heidari; Shivani Kapania, a doctoral student in the Human-Computer Interaction Institute (HCII); William Agnew, the Carnegie Bosch Postdoctoral Fellow in the HCII; Motahhare Eslami, an assistant professor in the HCII and S3D; and Sarah Fox, an assistant professor in the HCII.
The paper is available in the Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems.
LLMs are used as training tools across a variety of fields. In the medical and legal professions, these tools allow practitioners to simulate and practice real-life scenarios, such as a therapist training to identify mental health crises. In qualitative research, which is often interview-based, LLMs are being trained to mimic human behavior in their responses to questions and prompts.
In the study, the CMU team interviewed 19 people with experience in qualitative research. Participants interacted with an LLM chatbot-style tool, typing messages back and forth. The tool allowed researchers to compare LLM-generated data with human-generated data and reflect on ethical concerns.
Researchers identified several ways that using LLMs as study participants introduced limitations to scientific inquiry, including the model's method of gathering and interpreting knowledge. Study participants noted that the LLM tool often compiled its answers from multiple sources and fit them, sometimes unnaturally, into a single response.
For example, in a study about factory working conditions, a worker on the floor and a supervisor would likely have different responses about many aspects of the work and workplace. Yet an LLM participant generating responses might combine these two perspectives into one answer, conflating attitudes in ways not reflective of reality.
Another way the LLM responder introduced problems into the scientific inquiry process was in the form of consent. In the paper, the researchers note that LLMs trained on publicly available data from a social media platform could raise questions about informed consent and whether the people whose data the models are trained on have the option to opt out.
Overall, the study raises doubts about using LLMs as study participants, noting ethical concerns and questions about the validity of these tools.
"These models are encoded with the biases, assumptions and power dynamics of model producers and the data and contexts from which they are derived," the researchers wrote. "As such, their use in research reshapes the nature of the knowledge produced, often in ways that reinforce existing hierarchies and exclusions."
More information: Shivani Kapania et al, Simulacrum of Stories: Examining Large Language Models as Qualitative Research Participants, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (2025). DOI: 10.1145/3706598.3713220
Provided by Carnegie Mellon University. Citation: Can generative AI replace humans in qualitative research studies? (2025, May 14) retrieved 14 May 2025 from https://techxplore.com/information/2025-05-generative-ai-humans-qualitative.html This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.