Alaska Chief Justice pledges to speed up criminal cases, use AI for estate cases

In her State of the Judiciary address this week, Alaska Chief Justice Susan Carney acknowledged unacceptable delays in the court system’s processing of criminal cases and vowed to speed up processing times. A media investigation earlier this year found that the time needed to try the most serious felony cases in Alaska had tripled over the past decade.

Chief Justice Carney also noted efforts to improve the civil justice system, including in areas of family law and estate administration. The court system will be employing a generative AI chatbot to help people navigate the often arcane rules of estate processing after a loved one’s death.

This seems like an excellent use of AI (assuming, as always, that it provides accurate and reliable information). It can help ordinary people understand their obligations in handling an estate at lower cost and in less time. I imagine that many court systems will look to implement this type of AI technology in the near term.

Indiana courts find interesting new applications for technology

The beginning of each calendar year is the prime time for State of the Judiciary addresses, an opportunity for each state’s Chief Justice to personally address legislators, request needed resources, and champion the court system’s accomplishments.

This year, Indiana Chief Justice Loretta Rush highlighted some fascinating technological developments in her court system. One involved a pilot project that uses AI to generate transcripts in mental health commitment cases, making transcripts available in minutes rather than months. This is critical because many commitment decisions are appealed, and transcripts have historically taken so long to generate that the appeal could not be heard until the period of commitment had already passed, effectively denying a party the right of appeal. The new technology expedites the entire process and preserves a meaningful right of appeal in these difficult cases.

The second development is the creation of an integrated system for sharing data on the statewide jail population. Indiana currently has 20 different jail management software systems, which were not necessarily able to talk to each other. (This sounds incredible, but given the long history of local courts being tied to their county systems rather than a statewide court management system, it’s still not all that surprising.) The new system will allow the sharing of critical information, including fingerprint data.

The Indiana legislature will have to fully fund the jail software to the tune of $3 million, and has not committed to it yet. But the developments are interesting and noteworthy, and seemingly highly beneficial for both court administration and public safety.

Will the OpenAI case put pressure on US courts to resolve internet jurisdiction?

Artificial intelligence behemoth OpenAI is currently defending a lawsuit in India, brought by that country’s domestic news agency ANI. The primary allegation is that OpenAI improperly used ANI’s copyrighted material to train its generative AI programs.

OpenAI has raised a number of defenses, including that the courts of India have no personal jurisdiction over it. As every first-year law student learns, courts must have personal jurisdiction over a defendant before they can issue any binding order. For centuries, personal jurisdiction required that the defendant be physically present where the court was located. However, as 20th-century advances in transportation and communication made it easier for people to cross state and national boundaries, courts adjusted the doctrine. It is now widely recognized that someone who enters a state or foreign country (even virtually) and causes mischief can be subject to that state or country’s jurisdiction, even if the defendant is not physically located there.

But there are still limits. The United States Supreme Court has insisted that a defendant must “purposefully avail” itself of the state where the lawsuit is filed, meaning that it must engage with the state in some intentional and deliberate way. An accidental or unforeseen connection to the forum will not do.

And thus human interaction through the internet, so wide-ranging and ubiquitous in modern life, poses a problem. An e-commerce giant like Amazon or eBay might be said to purposefully avail itself of a forum by offering goods for sale in that forum through the internet. The interaction is knowing, willful, and intentional, and the case for jurisdiction is easy. But what about a third-party seller who puts a product on eBay without thinking about a particular market or location? Is that purposeful availment? Or what if someone posts allegedly infringing or defamatory material on social media or a blog? Is that person subject to personal jurisdiction anywhere the site can be accessed?

The U.S. Supreme Court has never answered that question, at least not directly. The Justices’ questions during oral argument in other personal jurisdiction cases suggest that the Court would like to answer it. But the Court seems unable to articulate a coherent and workable set of jurisdictional rules for the internet, and instead keeps deferring the issue. (Meanwhile, lower courts in the United States are doing the best they can to articulate meaningful principles of internet jurisdiction. A common approach allows the exercise of jurisdiction when the defendant “directed electronic activity into a forum” with the “manifest intent of engaging with persons in that forum.” That test captures the Amazons of the world, which know where they are selling and shipping products, but probably not the ordinary Instagrammer who just posts something online.)

But the Supreme Court may not be able to wait much longer. The outcome of the OpenAI case in India may force its hand, or at least put greater pressure on it to reach a resolution applicable to American courts.


Illinois Supreme Court issues policy on use of generative AI

The Illinois Supreme Court has issued a policy governing the use of generative AI. The policy is consistent with the ABA’s Formal Opinion on AI that came out last summer. Unsurprisingly, the Illinois policy extends an attorney’s ordinary ethical obligations to the use of generative AI, holding lawyers accountable for understanding how the technology works and for checking for errors and hallucinations before filing anything with the court.

Colorado judges discuss the pros and cons of AI

This is an interesting article on a recent panel discussion in Colorado, in which state and federal judges shared the courts’ emerging views on generative AI with the rest of the legal community. It is clear that, like the rest of us, courts are struggling to achieve the right balance between AI as an impermissible shortcut and AI as an efficient game-changer.

And AI can absolutely be that game-changer for written materials. Current iterations of AI tend to write in a dull and wooden style, at least for legal work. But short motions and briefs can be drafted in a matter of seconds (and polished within minutes), rather than taking hours to draft and revise. And the output is grammatically correct and readable, which is a huge plus. Thoughtful use of AI in written submissions might alleviate the problems that stem from the notable decline in younger lawyers’ writing skills.

It seems that we are headed in the direction of treating AI like a paralegal or inexperienced attorney — eventually its use will be explicitly permitted, but failure to confirm all the details will be an ethical violation in itself. Stay tuned.

State courts explore using AI for behind-the-scenes HR work

Most news about the use of AI in the legal world tends to focus on ethical slipups like relying on ChatGPT to draft briefs or do legal research. But behind the headlines, courts and law firms are becoming increasingly proficient at using generative AI to perform routine administrative and bureaucratic tasks. A good example is the use of AI to streamline human resources work for the courts. In a recent webinar hosted by the National Center for State Courts and Thomson Reuters, participants pointed out that, among other things, HR managers can employ AI to more quickly craft job descriptions and performance reviews.

Of course, AI is still a new and somewhat unpredictable technology, and there are real concerns about hallucination, infringement of intellectual property, and exposure of confidential information. But the technology is rapidly improving and meaningful protocols will be in place soon enough. Court and law firm administrators would do well to see AI as another potentially time-saving tool in the tool kit, no different from word processing software or copy machines in earlier generations.

The courts are still in Phase I of their relationship with AI — but change is coming

This week, the Colorado Court of Appeals issued its first opinion cautioning litigants about relying on generative AI to draft legal briefs, joining a number of other courts that have issued similar warnings (and sometimes sanctions) to parties and lawyers who included “bogus” AI-generated case citations.

Judicial pushback against the errors caused by ChatGPT and other early publicly available AI models is sound policy, no different than teachers balking at AI-generated student essays. The AI programs currently available to the public can be astonishing in their creativity, but are also prone to hallucination and more often than not produce a mediocre result. Professor Ethan Mollick has compared such programs to a tireless but clumsy intern — eager to please and lightning fast, but lacking polish, sophistication, or accountability to reality. So it is natural that the courts’ first priority is to put out the fire of fake case citations.

But one should not confuse legitimate concerns about flawed AI today with pessimism about the transformative power of AI going forward. Large language models are improving very quickly, and a rapid influx of users will spur even more rapid development. Legal research services like LexisNexis and Westlaw have introduced their own first-generation AI services, which aim to connect more rigorously to actual legal precedent. It will not be long before legal research is indeed faster, better, and more thorough than ever before — a change akin to the introduction of electronic legal databases in the 1980s.

The courts themselves are not far off from embracing AI for their own purposes. In Phase II, judges and court staff will rely on AI to read briefs and transcripts, summarize arguments, check citations, and even produce questions for oral argument. In Phase III, they will use AI to draft opinions and orders, initially in low-stakes cases (to help with the workload) but eventually in high-stakes, complex litigation. In Phase IV, AI itself will hear the case, render the decision, and draft an order or opinion.

Phase IV may feel futuristic, but it is coming, and sooner than we think. State courts in particular are contending with a massive increase in self-represented litigants — individuals who have real legal problems but who cannot (or choose not to) pay a lawyer to help guide them through the system. Many have cases that are legally straightforward (such as a basic contract dispute) and may be willing to submit those cases to an AI “judge” with the promise of a quicker and less expensive resolution. As AI improves, such judging programs eventually will be available on demand and from the comfort of one’s own home, no different from the modern telehealth industry. They may start as private, ADR-style offerings that compete with courts for customers, but eventually court systems themselves will feel pressure to embrace the same technology.

Of course, courts will not move into AI judging lightly, and nothing will happen until the courts are convinced that whatever system they employ can guarantee an accurate application of existing law, preserve the guarantees of due process, and protect confidentiality as needed. But we are on the cusp of a major technological transformation that could benefit resource-starved courts and decision-starved parties in equal measure.

Federal Advisory Committee considers impact of AI on evidentiary rules

The federal Advisory Committee on Evidence Rules has begun a very preliminary conversation on how artificial intelligence will impact the reliability and authentication of evidence. The committee met with experts in April and has just begun considering whether new rules will be needed to address AI-related concerns. Among the more prominent issues are (1) how to address allegations that proffered evidence is an AI-generated “deepfake” and (2) what the proper test should be for validating machine learning outputs.

A good summary of the committee’s progress can be found here. The full minutes of its discussion can be found here (starting at page 108).

This is somewhat reminiscent of the work of a parallel federal court committee, the Advisory Committee on Civil Rules, to address the discovery of electronically stored information (ESI) two decades ago. That committee eventually landed on a package of amendments designed to address the unique challenges of producing ESI in civil discovery. But it was not an easy road: by the time the new rules went into effect in 2006, individual judges had started crafting their own approaches to deal with the cases already in front of them. And just a few years later, the technological landscape had changed sufficiently that additional amendments were needed. One should therefore expect the Advisory Committee on Evidence Rules to proceed cautiously, even as AI’s transformation of the social and business landscape proceeds apace.

Videoconferencing as a (temporary) solution to the lack of court interpreters

This is an interesting article about the shortage of interpreters in the South Dakota court system. The state only has about 80 qualified interpreters, and only a fraction of them speak the primary languages of non-English-speaking litigants: Spanish, Arabic, Swahili, and Dinka.

The lack of qualified interpreters presents a serious access to justice issue. It can delay cases or even corrupt proceedings if the interpreter translates incorrectly. Moreover, interpreters must have familiarity with the technical language of court proceedings in order to be effective.

The article suggests one technology-based solution: bringing in interpreters remotely through videoconferencing. This approach has its own challenges, including technical glitches and lag time, but it may be the best available response at the moment. Still, it only seems a matter of time before high-quality, reliable AI could be used for simultaneous courtroom translation.