CRYPTOREPORTCLUB
Chatbot dreams generate AI nightmares for Bay Area lawyers

October 8, 2025
Sadie Harley, scientific editor
Andrew Zinin, lead editor

Editors' notes: This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, reputable news agency, proofread.
[Image: court room. Credit: Unsplash/CC0 Public Domain]

A Palo Alto, California, lawyer with nearly a half-century of experience admitted to an Oakland federal judge this summer that legal cases he referenced in an important court filing didn't actually exist and appeared to be products of artificial intelligence "hallucinations."

Jack Russo, in a court filing, described the apparent AI fabrications as a "first-time situation" for him and added, "I am quite embarrassed about it."

A specialist in computer law, Russo found himself in the rapidly growing company of lawyers publicly shamed as wildly popular but error-prone artificial intelligence technology like ChatGPT collides with the rigid rules of legal procedure.

Hallucinations—when AI produces inaccurate or nonsensical information—have posed an ongoing problem in the generative AI that has birthed a Silicon Valley frenzy since San Francisco's OpenAI released its ChatGPT bot in late 2022.

In the legal arena, AI-generated errors are drawing heightened scrutiny as lawyers flock to the technology. Irate judges are making referrals to disciplinary authorities and, in dozens of U.S. cases since 2023, levying financial penalties of up to $31,000, including a California-record fine of $10,000 last month in a Southern California case.

Chatbots respond to users' prompts by drawing on vast troves of data, using pattern analysis and sophisticated guesswork to produce results. Errors can occur for many reasons, including insufficient or flawed training data or incorrect assumptions by the AI. The problem affects not just lawyers but ordinary people seeking information, as when Google's AI overviews last year told users to eat rocks and to add glue to pizza sauce to keep the cheese from sliding off.

Russo told Judge Jeffrey White he took full responsibility for not ensuring the filing was factual, but said a long recovery from COVID-19 at an age beyond 70 led him to delegate tasks to support staff without "adequate supervision protocols" in place.

"No sympathy here," internet law professor Eric Goldman of Santa Clara University said. "Every lawyer can tell a sob story, but I'm not falling for it. We have rules that require lawyers to double-check what they file."

The judge wrote in a court order last month that Russo's AI-dreamed fabrications were a first for him, too. Russo broke a federal court rule by failing to adequately check his motion to throw out a contract dispute case, White wrote. The court, the judge sniped, "has been required to divert its attention from the merits of this and other cases to address this issue."

White issued a preliminary order requiring Russo to pay some of the opposing side's legal fees. Russo told White his firm, Computerlaw Group, had "taken steps to fix and prevent a reoccurrence." Russo declined to answer questions from this news organization.

As recently as mid-2023, it was a novelty to find a lawyer facing a reprimand for submitting court filings referring to nonexistent cases conjured up by artificial intelligence, but now such incidents arise nearly by the day, and even judges have been implicated, according to a database compiled by Damien Charlotin, a senior fellow at French business school HEC Paris who is tracking worldwide legal filings containing AI hallucinations.

"I think the acceleration is still ongoing," Charlotin said.

Charlotin said his database includes "a surprising number" of lawyers who are sloppy, reckless or "plain bad."

In May, San Francisco lawyer Ivana Dukanovic admitted in the U.S. District Court in San Jose to an "embarrassing and unintentional mistake" by herself and others at legal firm Latham & Watkins.

While representing San Francisco AI giant Anthropic in a music copyright case, they submitted a filing with hallucinated material, Dukanovic wrote. Dukanovic—whose company bio lists "artificial intelligence" as one of her areas of legal practice—blamed the creation of the false information on a particular chatbot: Claude.ai, the flagship product of her client Anthropic.

Judge Susan van Keulen ordered part of the filing removed from the court record. Dukanovic, who, along with her firm, appears to have dodged sanctions, did not respond to requests for comment.

Charlotin has found 113 U.S. cases involving lawyers submitting filings with hallucinated material, mostly legal-case citations, that have been the subject of court decisions since mid-2023. He believes many court submissions with AI fabrications are never caught, potentially affecting case outcomes.

Court decisions can have "life-changing consequences," including in matters involving child custody or disability claims, law professor Goldman said.

"The stakes in some cases are so high, and if someone is distorting the judge's decision-making, the system breaks down," Goldman said.

Still, AI can be a useful tool for lawyers, finding information people might miss, and helping to prepare documents, he said. "If people use AI wisely, it helps them do a better job," Goldman said. "That's pushing everyone to adopt it."

Survey results released in April by the American Bar Association, the nation's largest lawyers group, found that AI use by law firms almost tripled last year to 30% of responding law offices from 11% in 2023, and that ChatGPT was the "clear leader across firms of every size."

Fines may be the least of a lawyer's worries, Goldman said. A judge could flag an attorney to their licensing organization for discipline, or dismiss a case, or reject a key filing, or view everything the lawyer does in the case with skepticism. A client could sue for malpractice. Orders to pay the other side's legal fees can require six-figure payments.

Charlotin's database shows judges slapping many lawyers with warnings or referrals to disciplinary authorities, and sometimes purging all or part of a filing from the court record, or ordering payment of the opposition's fees. Last year, a federal appeals court in California threw out an appeal it said was "replete with misrepresentations and fabricated case law," including "two cases that do not appear to exist."

Charlotin expects his database to keep swelling.

"I don't really see it decrease on the expected lines of 'surely everyone should know by now,'" Charlotin said.

© 2025 MediaNews Group, Inc. Distributed by Tribune Content Agency, LLC.

Citation: Chatbot dreams generate AI nightmares for Bay Area lawyers (2025, October 8), retrieved 8 October 2025 from https://techxplore.com/news/2025-10-chatbot-generate-ai-nightmares-bay.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.


Disclaimer: Information found on cryptoreportclub.com is those of writers quoted. It does not represent the opinions of cryptoreportclub.com on whether to sell, buy or hold any investments. You are advised to conduct your own research before making any investment decisions. Use provided information at your own risk.
cryptoreportclub.com covers fintech, blockchain and Bitcoin bringing you the latest crypto news and analyses on the future of money.

© 2023-2025 Cryptoreportclub. All Rights Reserved
