Law firms are confronting the reality that while artificial intelligence (AI) can draft a brief in seconds, it can also hallucinate legal theories convincing enough to slip past traditional review filters.
As generative AI becomes a staple in legal drafting, a new risk has emerged: fabricated legal reasoning rather than merely fabricated facts. The resulting errors are increasingly leading to court-ordered sanctions.
Cat Casey, legal tech expert and partner at Masters AI Legal, said legal theory hallucinations are the trickiest to identify. Neither a quick Westlaw or Lexis search nor even the more robust anti-hallucination tools, such as BriefCatch’s RealityCheck, are equipped to flag this type of failure.
“A hallucinated legal theory passes every cite check and still blows up your case,” she told TechNewsWorld. “The occurrence of hallucinations in law offices is widespread and is seriously impacting court proceedings.”
Andrew Adams, partner and chief administrative officer at DarrowEverett, agrees that hallucinations and shadow AI — employees using unapproved AI tools — are two serious risks facing the legal industry today.
“AI is no longer an emerging issue for law firms. It is a current governance challenge with real consequences,” he told TechNewsWorld. “The lesson from 2025 and early 2026 is that no firm is immune.”
Courts Increase Scrutiny of AI Filings
Researcher Damien Charlotin maintains a database that tracks legal decisions in cases where generative AI produced hallucinated content involving fake citations and AI-generated arguments. Casey described the number of documented incidents as staggering.
She cited a recent tally in which the database cataloged over 1,369 legal decisions involving AI hallucinations. That number does not include the broader universe of hallucinated citations that likely go undetected in filings.
“That’s only what got caught. Courts don’t routinely verify every citation. Fabricated authorities pass undetected constantly, especially in cases that settle or where opposing counsel lacks resources to check,” Casey explained.
The courtroom impact is escalating. Casey noted that U.S. courts imposed over $145,000 in sanctions against law firms that submitted AI-hallucinated filings in the first quarter of this year.
“More than 300 federal judges have now adopted standing orders or local rules specifically addressing AI use in filings. Cases are getting delayed. Motions are getting stricken,” she said.
Two federal judges saw their own AI-assisted opinions challenged and withdrawn after counsel in the cases flagged hallucinated citations. The bench is not immune, she added.
Legal Teams Face Growing AI Liability
Casey referenced a recent report revealing that 79% of lawyers use AI in some form in their practice. It is less a question of whether lawyers use it than of how they use it.
She noted that the use of unauthorized consumer-grade AI tools by law office staff has become widespread. Over 68% of legal professionals admitted to using unapproved AI tools at least once in the past year.
“That number is likely far higher in reality, and less than 20% of firms have formal policies in place to manage the exposure,” she said, warning that firms risk compromising client confidentiality and privilege.
Adams pointed to a significant example of the risks posed by AI drafting: A federal court in Oregon imposed $110,000 in sanctions after attorneys relied on AI-generated, fictitious case law and then failed to take ownership of their wrongdoing.
“The court did refer to this case as an outlier, but it does demonstrate the substantial risk that law firms can face from both failing to properly review AI and then failing to rectify their mistakes,” he said.
Adams also referred to an April case in which an elite law firm filed an emergency letter admitting that AI-generated hallucinations had made their way into a bankruptcy court filing. While these are among the more dramatic examples, the number of decisions citing hallucinated cases continues to rise.
He clarified that verification is not optional for lawyers; it is an ethical obligation.
“Under Rule 11 and its state rule corollaries, every attorney who signs a filing certifies that the legal contentions are warranted by existing law or by a non-frivolous argument for extending, modifying, or reversing existing law or for establishing new law. That certification cannot be outsourced to a machine,” he said.
Shadow AI Expands Legal Exposure
According to Adams, shadow AI is arguably a more insidious problem because it operates outside any governance framework. He noted that a recent National Cybersecurity Alliance survey found that 43% of employees using AI admitted to sharing sensitive company information with AI tools without their employer’s knowledge.
“In law firms, where we handle privileged communications, trade secrets, and litigation strategy, the exposure is exponential. Materials shared outside the attorney-client relationship can become discoverable in litigation, making shadow AI a particularly acute concern for legal professionals,” he said.
Casey sees a danger in assuming generative AI outputs are trustworthy simply because they come from established legal platforms. A 2024 Stanford-led study found hallucination rates of roughly 33% for Westlaw AI-Assisted Research and 17% for Lexis+ AI under benchmark testing conditions.
“Lawyers face the same sanction for a hallucination, whether it is from Westlaw or an individual AI platform,” she said. “The courts have not differentiated. The trusted brand was not a defense.”
Red Flags in AI-Generated Legal Briefs
Casey described three main flavors of AI hallucination, each with its own hallmarks: wholesale case fabrication, fake quotes attributed to real cases, and real cases with authentic citations cited for propositions that have nothing to do with the case.
She said the following red flags often appear across the most common forms of hallucination:
- Too-good-to-be-true cases that fit a fact pattern too perfectly
- Opinions with balanced, clean prose and no hedging
- The same case cited across differing arguments, fact patterns, or matters
- Cases that cannot be found in a primary-source case database within a 30-second search
“Even absent any of these glaring red flags, any workflow that has reliance on AI research should have an audit component. Humans should always verify and then trust AI at this stage of the game,” Casey recommended.
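Part of that audit component can be automated. The sketch below is a minimal illustration, not any firm's or vendor's actual workflow: it uses a rough regular expression to pull reporter-style citations out of a draft and flags any citation that cannot be confirmed against a primary-source database. The verify_in_primary_source hook is a hypothetical placeholder that a firm would wire to whatever research platform it actually licenses.

```python
import re

# Rough pattern for common reporter citations, e.g., "410 U.S. 113" or
# "123 F.3d 456". Real citation grammars are far richer; this is only a sketch.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)"
    r"\s+\d{1,4}\b"
)

def extract_citations(draft_text: str) -> list[str]:
    """Collect unique candidate citations from a draft filing."""
    return sorted(set(CITATION_RE.findall(draft_text)))

def verify_in_primary_source(citation: str) -> bool:
    """Hypothetical hook: return True only if the citation resolves in a
    primary-source database (Westlaw, Lexis, CourtListener, etc.)."""
    raise NotImplementedError("connect this to the firm's research platform")

def audit_draft(draft_text: str) -> list[str]:
    """Return citations that could not be confirmed, queued for human review."""
    return [c for c in extract_citations(draft_text)
            if not verify_in_primary_source(c)]
```

A failed lookup does not prove fabrication, since a crude regex will miss or mangle plenty of legitimate citations, and a successful lookup says nothing about misquoted holdings or mismatched propositions. That is precisely why Casey frames this kind of check as an audit step that feeds human verification rather than replaces it.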
Building AI Governance in Law Firms
Masters AI Legal provides a specialized learning ecosystem to train law firms and legal professionals on implementing generative AI. Lawyers must supervise, maintain technical competence, protect confidentiality, and preserve candor with the court, Casey noted.
She sees a growing challenge in adapting as AI moves from stand-alone tools to integration with key legal platforms like Lexis and Westlaw, and even into word processing software.
“What changes is how invisible the AI becomes, and that invisibility is exactly where the risk lives,” she clarified.
Adams suggested that, aside from access controls and training, law firms must adapt their governance of litigation filings and work product to ensure that deliverables are properly reviewed for hallucinations whenever an unauthorized, and therefore unauditable, AI program was used.
“These risks have prompted a large-scale rethink of law firm governance. At DarrowEverett, we adopted our legal AI platform based on its ability to control, monitor, and audit usage, as well as its data storage and processing security,” Adams said.
He added that building robust governance frameworks requires careful attention to detail and ongoing compliance efforts. Some courts have now made clear, as in United States v. Heppner, that communications with unsecured third-party AI systems may not be privileged and may be used against parties in litigation.
“Firms and corporate legal departments that fail to treat AI governance with the same rigor as cybersecurity or conflicts management are exposing themselves to substantial risk,” he reiterated.