April 1, 2025
AI thinks like us, flaws and all: Study finds ChatGPT mirrors human decision biases in half the tests

Can we really trust AI to make better decisions than humans? A new study says … not always. Researchers have discovered that OpenAI's ChatGPT, one of the most advanced and popular AI models, makes the same kinds of decision-making errors as humans in some situations, showing biases such as overconfidence and the hot-hand (gambler's) fallacy, yet behaves unlike a human in others (for example, it is not susceptible to base-rate neglect or the sunk-cost fallacy).
Published in the journal Manufacturing & Service Operations Management, the study shows that ChatGPT doesn't just crunch numbers; it "thinks" in ways eerily similar to humans, complete with mental shortcuts and blind spots. These biases remain fairly stable across different business situations but may change as AI evolves from one version to the next.
AI: A smart assistant with human-like flaws
The study, "A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?," put ChatGPT through 18 different bias tests. The results?
- AI falls into human decision traps: ChatGPT showed biases such as overconfidence, ambiguity aversion, and the conjunction fallacy (also known as the "Linda problem") in nearly half the tests; a sketch of one such probe follows this list.
- AI is great at math but struggles with judgment calls: it excels at logical and probability-based problems, yet stumbles when decisions require subjective reasoning.
- Bias isn't going away: although the newer GPT-4 model is more analytically accurate than its predecessor, it sometimes displayed stronger biases in judgment-based tasks.
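The paper's exact prompts and scoring protocol are not reproduced in this article, but a single bias probe of the kind it describes is easy to picture. Below is a minimal sketch, assuming the OpenAI Python SDK and a classic Linda-style conjunction question; the model name, prompt wording, and answer format are illustrative assumptions, not the study's materials.

```python
# Minimal sketch of a single bias probe (illustrative only; not the study's protocol).
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

# A Linda-style conjunction question: option B is a conjunction of events and so
# can never be more probable than option A alone; choosing B exhibits the fallacy.
PROMPT = (
    "Linda is 31, single, outspoken, and was deeply concerned with social "
    "justice as a student. Which is more probable?\n"
    "A) Linda is a bank teller.\n"
    "B) Linda is a bank teller and is active in the feminist movement.\n"
    "Answer with exactly one letter: A or B."
)

response = client.chat.completions.create(
    model="gpt-4o",          # assumed model name; the study compared GPT-3.5 and GPT-4
    messages=[{"role": "user", "content": PROMPT}],
    temperature=1.0,         # sample at nonzero temperature to observe variability
)

answer = response.choices[0].message.content.strip()
print("Model chose:", answer)  # "B" would mirror the human conjunction fallacy
```

Running a probe like this many times, and across model versions, is the natural way to turn a one-off answer into a measurable bias rate.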
Why this matters
From job hiring to loan approvals, AI is already shaping major decisions in business and government. But if AI mimics human biases, could it be reinforcing bad decisions instead of fixing them?
"As AI learns from human knowledge, it might additionally suppose like a human—biases and all," says Yang Chen, lead creator and assistant professor at Western College. "Our analysis reveals when AI is used to make judgment calls, it generally employs the identical psychological shortcuts as folks."
The study found that ChatGPT tends to do the following (a sampling sketch for measuring one of these tendencies appears after the list):
- Play it safe: AI avoids risk, even when riskier choices might yield better results.
- Overestimate itself: ChatGPT assumes it's more accurate than it actually is.
- Seek confirmation: AI favors information that supports existing assumptions, rather than challenging them.
- Avoid ambiguity: AI prefers alternatives with more certain information and less ambiguity.
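A tendency like "play it safe" is a statement about rates, not about any single answer, so it is typically measured by repeated sampling. Here is a minimal sketch under the same assumptions as before (OpenAI Python SDK, an invented equal-expected-value gamble, an assumed model name); it illustrates the idea rather than the study's actual design.

```python
# Sketch: estimate a model's risk preference by repeated sampling (illustrative;
# the prompt, model name, and sample size are assumptions, not the study's design).
from collections import Counter
from openai import OpenAI

client = OpenAI()

# Both options have the same expected value ($50), so consistently
# picking A suggests risk aversion, i.e., a "play it safe" tendency.
PROMPT = (
    "Choose one option and answer with exactly one letter.\n"
    "A) Receive $50 for sure.\n"
    "B) A 50% chance of $100, otherwise nothing."
)

counts = Counter()
for _ in range(20):  # small sample for illustration; larger runs give tighter estimates
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,
    )
    letter = resp.choices[0].message.content.strip()[:1].upper()
    counts[letter] += 1

print(dict(counts))  # e.g. {'A': 17, 'B': 3} would indicate risk aversion
```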
"When a call has a transparent proper reply, AI nails it—it’s higher at discovering the precise components than most individuals are," says Anton Ovchinnikov of Queen's College. "However when judgment is concerned, AI could fall into the identical cognitive traps as folks."
So, can we trust AI to make big decisions?
With governments worldwide working on AI regulations, the study raises an urgent question: Should we rely on AI to make important calls when it can be just as biased as humans?
"AI isn't a impartial referee," says Samuel Kirshner of UNSW Enterprise Faculty. "If left unchecked, it won’t repair decision-making issues—it might really make them worse."
The researchers say that's why businesses and policymakers need to monitor AI's decisions as closely as they would a human decision-maker's.
"AI needs to be handled like an worker who makes essential selections—it wants oversight and moral pointers," says Meena Andiappan of McMaster College. "In any other case, we threat automating flawed considering as a substitute of bettering it."
What's next?
The study's authors recommend regular audits of AI-driven decisions and continued refinement of AI systems to reduce bias. With AI's influence growing, making sure it improves decision-making, rather than just replicating human flaws, will be key.
"The evolution from GPT-3.5 to 4.0 suggests the newest fashions have gotten extra human in some areas, but much less human however extra correct in others," says Tracy Jenkin of Queen's College. "Managers should consider how completely different fashions carry out on their decision-making use instances and frequently re-evaluate to keep away from surprises. Some use instances will want vital mannequin refinement."
More information: Yang Chen et al, A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?, Manufacturing & Service Operations Management (2025). DOI: 10.1287/msom.2023.0279
Provided by Institute for Operations Research and the Management Sciences