The courts are still in Phase I of their relationship with AI — but change is coming

This week, the Colorado Court of Appeals issued its first opinion cautioning litigants about relying on generative AI to draft legal briefs, joining a number of other courts that have warned, and in some cases sanctioned, parties and lawyers for including “bogus” AI-generated case citations.

Judicial pushback against the errors caused by ChatGPT and other early publicly available AI models is sound policy, no different from teachers balking at AI-generated student essays. The AI programs currently available to the public can be astonishing in their creativity, but they are also prone to hallucination and more often than not produce a mediocre result. Professor Ethan Mollick has compared such programs to a tireless but clumsy intern — eager to please and lightning fast, but lacking polish, sophistication, or accountability to reality. So it is natural that the courts’ first priority is to put out the fire of fake case citations.

But one should not confuse legitimate concerns about flawed AI today with pessimism about the transformative power of AI going forward. Large language models are improving very quickly, and a rapid influx of users will spur even faster development. Legal research services like LexisNexis and Westlaw have introduced their own first-generation AI services, which aim to tie their output more rigorously to actual legal precedent. It will not be long before legal research is faster, better, and more thorough than ever before — a change akin to the introduction of electronic legal databases in the 1980s.

The courts themselves are not far from embracing AI for their own purposes. In Phase II, judges and court staff will rely on AI to read briefs and transcripts, summarize arguments, check citations, and even produce questions for oral argument. In Phase III, they will use AI to draft opinions and orders, initially in low-stakes cases (to help with the workload) but eventually in high-stakes, complex litigation. In Phase IV, AI itself will hear the case, render the decision, and draft an order or opinion.

Phase IV may feel futuristic, but it is coming, and sooner than we think. State courts in particular are contending with a massive increase in self-represented litigants — individuals who have real legal problems but who cannot (or choose not to) pay a lawyer to help guide them through the system. Many have cases that are legally straightforward (such as a basic contract dispute) and may be willing to submit those cases to an AI “judge” with the promise of a quicker and less expensive resolution. As AI improves, such judging programs will eventually be available on demand from the comfort of one’s own home, much like modern telehealth. They may start as private, ADR-style offerings that compete with courts for customers, but eventually court systems themselves will feel pressure to embrace the same technology.

Of course, courts will not move into AI judging lightly, and nothing will happen until the courts are convinced that whatever system they employ can ensure an accurate application of existing law, preserve the guarantees of due process, and protect confidentiality as needed. But we are on the cusp of a major technological transformation that could benefit resource-starved courts and decision-starved parties in equal measure.
