This is the second in a two-part series on AI in the judicial system.

Three years after an artificial intelligence (AI) system first passed the bar exam, the American legal system is being pulled in two directions.

In one direction is overconfidence, which has resulted in well-publicized courtroom mistakes. In the other is risk aversion, which has provoked sanctions, restrictions, and even outright bans. Both reactions are understandable, but neither will ultimately serve the public good. Threading the needle between utopian promises and AI doomerism requires open-minded judges to test AI deliberately, learn its limits, and proceed with caution.

Practical Courtroom AI

Early adopters include Federal Magistrate Judge Allison Goddard, who keeps an AI model open on her computer throughout the day to search case records, and Federal Judge Xavier Rodriguez of the Western District of Texas, who uses it to draft questions for hearings and summarize testimony. Both draw the line at activities requiring greater discretion, such as bail or custody decisions. Their philosophy mirrors a four-tier framework developed by the National Center for State Courts, which sorts AI uses by their potential to violate constitutional rights, from low-risk administrative tasks to high-stakes sentencing recommendations.

AI Risk Framework

Risk Level   | Description                             | Example Tasks                                                     | Human Role
Minimal      | Low-risk, routine uses                  | Drafting emails, summarizing meeting notes                        | Supervisory oversight
Moderate     | Uses requiring greater scrutiny         | Drafting opinions, conducting legal research                      | Active accuracy verification
High         | Uses that may impact a person’s rights  | Drafting opinions, conducting legal research                      | Active accuracy verification
Unacceptable | Too consequential to delegate           | Automated decisions about incarceration, family relations, health | AI should not be used

Source: National Center for State Courts, “Principles & Practices for AI Use in Courts”

AI risk frameworks are useful countermeasures against “automation bias,” the tendency of humans to over-rely on automated systems. This cognitive bias has ensnared federal judges in Mississippi and New Jersey, who were forced to withdraw rulings after litigants flagged nonexistent allegations, misstated case outcomes, and fabricated quotes. While attorneys face sanctions and public embarrassment for AI-generated errors, judges face few formal consequences. That is beginning to change. In January 2026, the California Senate passed SB 574, a first-of-its-kind bill that bars judges from delegating decision-making authority to AI.

But early failures do not mean a given technology is a dead end. Advocates argue that offloading labor-intensive work to AI could speed up the court process, offering relief to the 68 percent of state courts that experienced staff shortages in the past year. Labor challenges jeopardize defendants’ right to a speedy trial and extend pretrial detention—the primary driver of jail population growth. Meanwhile, only 17 percent of state courts are currently using generative AI, and 70 percent still prohibit employees from using AI tools for court business.

Navigating the AI Compliance Maze

So far, bar associations or court systems in 38 states have established policies, rules, or guidance governing AI use by legal professionals. Specifics vary from courtroom to courtroom, driven more by individual judges’ standing orders than by any uniform vision. Some courts have explicitly declined to issue guidance, while others have banned AI altogether.

Many judges now require disclosure of AI-generated content. The Northern District of Texas, for example, requires a statement on the first page of any AI-assisted filing. The New York Unified Court System has taken the opposite approach, discouraging disclosure requirements as a matter of statewide policy. The result is a system where AI policies shift depending on the court, or even the specific judge, assigned to a case.

The Problem Isn’t AI—It’s Bad Lawyers

When it comes to AI regulation, a light-touch approach is the way to go. Attorneys were overstating arguments, citing irrelevant precedent, and making unsupported claims long before computers were invented. Knowing whether AI helped draft a legal document has no bearing on the legal obligations of candor and accuracy. Policies that explicitly require review of all AI-assisted materials are redundant because attorneys are already bound by professional conduct rules to provide accurate, fully vetted citations in their briefs. Furthermore, the burden of understanding and complying with a patchwork of standing orders risks canceling out whatever benefits AI could provide.

The values required to integrate AI responsibly are the same ones that have guided American jurisprudence for 250 years. So long as lawyers, judges, and other legal professionals remain accountable for their work, experimentation is the only way they will retain their value in a rapidly changing knowledge economy.

Cultivating AI Literacy

Ultimately, digital natives whose understanding of the law formed alongside the technology will decide AI’s direction in the courtroom. In 2025, Ohio’s Case Western Reserve University School of Law became the first in the nation to require certification in legal AI for all first-year law students. At least eight law schools followed suit within a year.

As machines take over a greater share of knowledge work, the U.S. court system must decide when and where human judgment remains non-negotiable. The answer is neither blind adoption nor blanket rejection—it is careful, principled stewardship.

The Criminal Justice and Civil Liberties program focuses on public policy reforms that prioritize public safety as well as due process, fiscal responsibility, and individual liberty.