Judges and legal experts are raising alarms as generative artificial intelligence tools flood the legal profession, bringing a wave of botched filings, fake case citations, and mounting concerns about fraudulent evidence.
The problem has grown urgent enough that in recent months, courts across the country have begun issuing orders clarifying the permissible uses of generative AI in litigation, often after being burned by attorneys who failed to vet what the machines produced. Other courts, meanwhile, have offered no guidance at all.
“It’s like having a really high-energy, if somewhat naive, associate,” said Tennessee Attorney General Jonathan Skrmetti, speaking with the Washington Examiner about AI’s role in legal practice. “You need to make sure that you’re not making any misrepresentations based on the computer being stupid.”

Skrmetti said he expects courts to take a largely hands-off approach to how lawyers produce written work, so long as there’s transparency and human oversight. “As the [AI] products get better and more reliable, and there are more mechanisms built in, both at the attorney level and the technology level, to avoid hallucinated cases, I think courts will probably say, ‘However you produce this brief is fine,’” he said.
But that ideal has yet to materialize, owing in part to a string of glaring mistakes in courts nationwide and the absence of uniform rules to address them.
Costly errors pile up in courts across the country
Some attorneys say the root of the problem isn’t overreliance on technology — it’s exhaustion. Junior lawyers are increasingly using AI tools as a stopgap under pressure to meet impossible deadlines.
“A lot of associates are burnt out and they work long hours,” said attorney Harshita Ganesh, who advises her firm on AI policy. “And in the case of big law, you’re working crazy hours and you’re expected to have some crazy turnaround on deliverables.”
Ganesh said attorneys may turn to generative AI not because they fully trust it, but because they’re desperate for relief. “They’re tired and they’re exhausted, and they need a shortcut,” she said.
The pressure leading attorneys to rely more heavily on generative AI has already led to several high-profile mistakes.
In May, a California federal judge fined the law firm Ellis George $31,000 after a filing was found to contain false citations produced by Google Gemini and another legal AI tool. In a separate case, the AI company Anthropic acknowledged that its model Claude fabricated a citation in a filing submitted on its behalf in a copyright lawsuit.
Last month, a federal judge in Colorado ordered two lawyers representing MyPillow CEO Mike Lindell to pay $3,000 each after they submitted a brief filled with more than two dozen hallucinated case citations. The judge noted that the attorneys failed to provide a credible explanation and had not been forthcoming about their use of generative AI.
Another notable case emerged in late 2023, when Michael Cohen, Donald Trump’s former attorney and fixer, admitted that he had unwittingly passed along AI-generated fake legal cases to his attorney. In a court filing unsealed that December, Cohen said he used Google Bard to search for legal citations as part of a motion to end his court supervision early. Believing Bard was simply a “super-charged search engine,” Cohen said he didn’t realize it could fabricate case law.
“As a non-lawyer, I have not kept up with emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like Chat-GPT, could show citations and descriptions that looked real but actually were not,” Cohen, who was disbarred in 2019, said at the time. “Instead, I understood it to be a super-charged search engine and had repeatedly used it in other contexts to (successfully) find accurate information online.”
Cohen’s attorney, David M. Schwartz, submitted the fabricated cases and blamed the error on a misunderstanding with another lawyer on the case. The false citations were later discovered by Cohen’s separate counsel, who alerted the court and federal prosecutors.
Judges and bar regulators weigh in
The most aggressive action came in 2023 from U.S. District Judge Brantley Starr, a Texas-based appointee of President Donald Trump, who ordered that any filings before his court must be either AI-free or reviewed by a human lawyer “using print reporters or traditional legal databases.”
Starr warned that AI is “unbound by any sense of duty, honor, or justice,” and said lawyers who violate the rule could face sanctions under Rule 11 of the Federal Rules of Civil Procedure.
David Coale, a constitutional law attorney at Lynn Pinker Hurst & Schwegmann, told the Washington Examiner that while the hallucinated citations problem is now familiar, the next frontier is even more dangerous: fake evidence.
“The real issue, just on the horizon, is hallucinated facts,” Coale said. “The coming problem, I think, is not lawyers misusing AI — it’s clients. Very angry people in child custody cases, for example, who put together ‘alternative facts’ to fool their own lawyers, opposing counsel, and the courts.”
Coale said that while lawyers may be learning to adapt to the technology’s quirks, clients could soon become the larger liability.
Irina Raicu, who directs the Internet Ethics program at Santa Clara University’s Markkula Center for Applied Ethics, said much of the blame lies with how these tools are marketed.
“Lawyers have continued to submit filings with AI-generated errors, and that reflects both the fact that many people (lawyers included) don’t understand the limitations of generative AI tools and the fact that the marketing of such tools often downplays those limitations,” she said.
Raicu also pointed to a July 2024 opinion from the American Bar Association that outlined basic responsibilities for ethical AI use, but warned that the profession has a long way to go. “Professional guidance must go deeper,” Raicu said. “Stanford researchers, for example, noted last year that sycophancy, the tendency of many AI models to agree with and flatter the user, may present particular dangers for pro se litigants.”
Lack of guidance leads to confusion
While some judges, such as Starr, have taken the initiative to set clear policies, most federal and state courts have not. The result is a patchwork of expectations, or no guidance at all.
“I don’t think we’re going to have, like, perfect uniformity,” said Ganesh. “It will be a huge, mountainous task to get everybody on the same page about this.”
A 2025 survey from the National Center for State Courts and the Thomson Reuters Institute found that while 55% of court professionals see AI as having a “transformational” effect on operations, only 17% of courts currently use generative AI tools, and 70% prohibit court staff from using them altogether.
Have any judges missed a mistake in legal briefs?
Experts on the intersection of law and AI have said that so far, AI-induced hallucinations have been caught before they’ve had a chance to influence a judge’s ruling. But the increasing reliance on AI raises the stakes for human error.
“Judges are also human and their clerks are human,” Ganesh said. “So something may get missed. It’s not out of the realm of possibility.”
Even though legal filings typically pass through multiple layers of review by associates, paralegals, and partners, Ganesh said that process “falls apart” when attorneys are stretched too thin.
Some experts told the Washington Examiner that attorneys are not obligated to disclose their use of AI, likening such disclosure to revealing “tricks of the trade.” But others, such as Skrmetti, say transparency is key to ensuring trust between litigants and judges.
“From my perspective, the biggest issue with using AI as a lawyer is legal ethics,” Skrmetti said. “You need to make sure your clients know if you use it.”
He believes that as long as attorneys maintain oversight and transparency, regulation may prove unnecessary. “In the long run, I think there probably won’t be that much court regulation of AI,” he said, so long as attorneys who use it take care to keep errors out of their briefs.
Embrace the future or fall behind?
Despite high-profile mistakes and uneven regulation, most legal experts agree that generative AI is here to stay — and that attorneys must learn to use it responsibly.
“I think it’s something we do need to embrace and we need to use,” said Ganesh. “But we do need to collectively come up with guidelines … and we need to make sure attorneys are reined in a bit when using it, which is easier said than done.”
Meanwhile, persistent staffing shortages, ballooning caseloads, and generational turnover continue to strain the system, which is exactly the kind of pressure driving attorneys to experiment with AI in the first place.
That disconnect between rapid technological change and slow institutional response leaves lawyers and courts in uncharted territory.
As Ganesh put it, “We’re in a data collection phase. It’s a dotted line right now at best.”