Due Processing: As Lawyers Go All-In on AI, the Courts Play Catch-Up
This is the first in a two-part series on AI in the judicial system.
When Bradley Heppner sat down late last year to consult a chatbot about his legal woes, he assumed the conversation was private. Facing a $300 million securities fraud investigation, the former CEO used AI to brainstorm defense strategies and stress-test legal theories. In the privacy of the chat window, he treated the computer like another member of his defense team.
Unfortunately for Heppner, the Southern District of New York didn’t see it that way. In a landmark decision last month, the court ruled that AI-generated documents are not protected by attorney-client privilege. According to Judge Jed Rakoff, Heppner wasn’t talking to a lawyer—he was talking to a piece of software with clear terms of service. Even though “generative artificial intelligence presents a new frontier,” Rakoff wrote, “AI’s novelty does not mean that its use is not subject to longstanding legal principles.”
The Artificial Paralegal
The decision set an important, if seemingly obvious, precedent: Chatbots are not lawyers. But that hasn’t stopped prosecutors and defense attorneys from leaning on them heavily, often with little guidance on where the limits should be.
The district attorney in Montgomery County, Texas, uses AI to summarize handwritten documents, translate Spanish to English, and distill massive datasets from social media. Others are using it to manage the mountains of paperwork that often bottleneck the legal process. The Los Angeles County Public Defender’s office—which handles thousands of cases from dozens of arresting agencies—uses the technology to read incoming police reports, standardize the formats, and extract relevant information, saving thousands of staff hours per year.
Nowhere is AI's potential more apparent than in labor-intensive tasks like reviewing body camera footage, which now figures in over 80 percent of criminal cases. Prosecutors are using it to quickly search for critical moments, such as Miranda warnings or coercive interrogation techniques. Ideally, less time spent combing through hours of footage means more time for tasks that require a law degree.
If AI can improve lawyer productivity, it could bolster access to justice, give victims faster resolutions, and reduce the time people languish in jail before trial. The legal sector has long suffered from what economists call "cost disease," the tendency of labor-intensive industries to grow more expensive relative to the broader economy. This effect is visible in the dramatic decline in trials in recent decades, as rising litigation costs push more cases toward settlement and plea bargains. AI-driven productivity gains could help reverse that trajectory.
But efficiency has never been the primary value when life and liberty are at stake—a lesson some have learned the hard way.
AI’s Credibility Gap
In late 2025, a federal judge in Alabama fined a legal team $5,000 for filing a motion containing caselaw fabricated by AI. It happened again a few months later in Wisconsin. Courts have responded with increasing severity, imposing steep fines and license suspensions for AI mistakes. A database that tracks AI hallucinations has documented nearly 700 instances in U.S. court filings since early 2025. Even specialized legal research platforms like Westlaw AI and Lexis+ AI are not immune from inaccuracies.
These errors carry a unique weight for prosecutors. Unlike a civil litigant in a contract dispute, prosecutors wield the coercive power of the state, so when mistakes find their way into court filings, they erode public confidence in the system. As the defense in a recent Nevada County, California, case argued, AI-generated errors represent an existential threat to due process. In response, the local district attorney implemented an office-wide policy directive and appointed an AI policy coordinator.
But not everyone in the judiciary sees these blunders as a crisis. As federal Judge Xavier Rodriguez of the Western District of Texas recently pointed out, flawed legal briefs are nearly as old as the common law tradition itself. "Lawyers have been hallucinating well before AI," he said, noting that attorneys are already bound by professional conduct rules to verify the accuracy of their work. Courtroom errors may be nothing new, but distinguishing genuine legal judgment from digital shortcuts presents a more profound challenge.
The Human Element
As AI commoditizes basic legal research, the courtroom will continue to run on persuasion. For now, skills like reading a witness's body language or connecting with a jury remain firmly in human territory. So does the evaluation of emerging technology itself. Tools that use generative AI to convert body-worn camera audio into written reports blur the line between software and officer testimony, presenting an "audit trail problem" for legal professionals.
For this reason, in 2024 the King County Prosecuting Attorney's Office in Seattle, Washington, declared it would not accept AI-assisted police narratives as evidence. Whether or not that approach is the right one, it reflects the kind of proactive policy thinking the moment requires. Before problems arise, prosecutors' and public defenders' offices should develop formal AI use policies that assess use cases, evaluate confidentiality implications, and establish a process for vetting new tools.
The Heppner ruling may have clarified that chatbots aren’t lawyers, but it didn’t answer the harder question: What happens when lawyers become dependent on AI? If the scales of justice are to remain balanced, AI must be fenced in by old-fashioned transparency, accountability, and the ability to cross-examine the machine itself.