April 28, 2025
People trust legal advice generated by ChatGPT more than a lawyer: new study

People who aren't legal experts are more willing to rely on legal advice provided by ChatGPT than by real lawyers, at least when they don't know which of the two provided the advice. That's the key finding of our new research, which highlights some important concerns about the way the public increasingly relies on AI-generated content. We also found that the public has at least some ability to identify whether the advice came from ChatGPT or a human lawyer.
AI tools like ChatGPT and other large language models (LLMs) are making their way into our everyday lives. They promise to provide quick answers, generate ideas, diagnose medical symptoms, and even help with legal questions by providing concrete legal advice.
But LLMs are known to produce so-called "hallucinations", that is, outputs containing inaccurate or nonsensical content. This means there is a real risk in people relying on them too much, particularly in high-stakes domains such as law. LLMs tend to present advice confidently, making it difficult for people to distinguish good advice from decisively voiced bad advice.
We ran three experiments on a total of 288 people. In the first two experiments, participants were given legal advice and asked which advice they would be willing to act on. When people didn't know whether the advice had come from a lawyer or an AI, we found they were more willing to rely on the AI-generated advice. This means that if an LLM gives legal advice without disclosing its nature, people may take it as fact and prefer it to expert advice from lawyers, potentially without questioning its accuracy.
Even when participants were told which advice came from a lawyer and which was AI-generated, we found they were willing to follow ChatGPT just as much as the lawyer.
One reason LLMs may be favored, as we found in our study, is that they use more complex language. Real lawyers, on the other hand, tended to use simpler language but more words in their answers.
The third experiment investigated whether participants could distinguish between LLM- and lawyer-generated content when the source is not revealed to them. The good news is that they can, but not by very much.
In our task, random guessing would have produced a score of 0.5, while perfect discrimination would have produced a score of 1.0. On average, participants scored 0.59, indicating performance that was slightly better than random guessing but still relatively weak.
Regulation and AI literacy
This is a crucial moment for research like ours, as AI-powered systems such as chatbots and LLMs are becoming increasingly integrated into everyday life. Alexa or Google Home can act as a home assistant, while AI-enabled systems can help with complex tasks such as online shopping, summarizing legal texts, or producing medical data.
Yet this comes with significant risks of people making potentially life-altering decisions guided by hallucinated misinformation. In the legal context, AI-generated, hallucinated advice could cause unnecessary complications or even miscarriages of justice.
That's why it has never been more important to regulate AI properly. Attempts so far include the EU AI Act, article 50.9 of which states that text-generating AIs should ensure their outputs are "marked in a machine-readable format and detectable as artificially generated or manipulated".
But this is only part of the solution. We'll also need to improve AI literacy so that the public is better able to assess content critically. When people are better able to recognize AI, they'll be able to make more informed decisions.
This means we need to learn to question the source of advice, understand the capabilities and limitations of AI, and emphasize the use of critical thinking and common sense when interacting with AI-generated content. In practical terms, it means cross-checking important information with trusted sources and including human experts to prevent overreliance on AI-generated information.
In the case of legal advice, it may be fine to use AI for some initial questions: "What are my options here? What do I need to read up on? Are there any similar cases to mine, or what area of law is this?" But it is important to verify the advice with a human lawyer long before ending up in court or acting on anything generated by an LLM.
AI can be a valuable tool, but we must use it responsibly. By using a two-pronged approach that focuses on regulation and AI literacy, we can harness its benefits while minimizing its risks.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: People trust legal advice generated by ChatGPT more than a lawyer: new study (2025, April 28), retrieved 28 April 2025 from https://techxplore.com/news/2025-04-people-legal-advice-generated-chatgpt.html