Federal Advisory Committee considers impact of AI on evidentiary rules

The federal Advisory Committee on Evidence Rules has begun a very preliminary conversation on how artificial intelligence will affect the reliability and authentication of evidence. The committee met with experts in April and has only just begun considering whether new rules will be needed to address AI-related concerns. Among the more prominent issues are (1) how to address allegations that proffered evidence is an AI-generated “deepfake” and (2) what the proper test should be for validating machine learning outputs.

A good summary of the committee’s progress can be found here. The full minutes of their discussion can be found here (starting at page 108). 

This is somewhat reminiscent of the work of a parallel federal court committee, the Advisory Committee on Civil Rules, to address the discovery of electronically stored information (ESI) two decades ago. That committee eventually landed on a package of amendments designed to address the unique challenges of producing ESI in civil discovery. But it was not an easy road: by the time the new rules went into effect in 2006, individual judges had already started crafting their own approaches to deal with the cases in front of them. And just a few years later, the technological landscape had changed enough that additional amendments were needed. One should therefore expect the Advisory Committee on Evidence Rules to proceed cautiously, even as AI’s transformation of the social and business landscape proceeds apace.