Generative AI and Trial Advocacy: Back to Basics?

  • Market Insight 17 December 2025
  • North America
  • Tech & AI evolution
  • Commercial Disputes

Technological advances in generative AI may lead to a resurgence in the importance of traditional trial advocacy, as video evidence becomes less trustworthy and judges and juries seek assurances of authenticity that only live humans can provide.

The past several years have seen the proliferation, and later refinement, of web-based tools that can take a user's text description and generate a short video to match it. OpenAI Sora (now Sora 2), Google Veo (now Veo 3.1), Meta Vibes, and Adobe Firefly all generate video using AI, with other competitors sure to follow as the generative AI space matures.

When these tools were first released in late 2024, the immediate reactions were mixed (as with many generative AI products): amazement, but also skepticism. The videos were good, often good enough to look real at first glance, but flaws could be found on closer examination. In the past year, however, several new iterations of those tools have been released, and the output has only improved. In a recent New York Times quiz asking readers to identify whether short videos were generated by AI, the rate of correct responses hovered around 50% for most videos, and one AI-generated video was identified as such by only 32% of readers.[1] In other words, for many videos, determining whether they were generated by AI is no more reliable than calling a coin flip. And given how recently these products entered the marketplace, there is every reason to believe the videos they generate will only become more convincing.

The development of generative AI video, accessible to anyone with an OpenAI or Google subscription, has broad implications for how society treats information contained in videos.[2][3] But perhaps nowhere will the effects be felt more than in courts trying to adjudicate factual disputes. For years, video has been seen as the gold standard of evidence: unassailable even in the face of contradictory live testimony, and sufficient even to support summary judgment.[4] Indeed, a good story without video to back it up is often not seen as good enough by today's jurors, especially when a video would be expected.[5] But what happens when the video itself could be fabricated, and fabricated so well that no expert or computer can detect that it is fake? Video is then no longer the gold standard of evidence, and indeed may be treated as unreliable. We therefore may have come full circle: the means of proof that predate video technology, effective advocacy by attorneys and coherent testimony by witnesses, may be the only tools available to fill the credibility gap left by the rise of generative AI.

Historically, introducing video evidence at trial has been straightforward. In federal courts, under Federal Rule of Evidence 901(a), the proponent of a piece of evidence must present sufficient proof to support a finding that the item is what they claim it to be; with video evidence, this is usually done through a witness testifying that the video accurately portrays what they observed, or that the video is otherwise authentic such that what it shows is true.[6] Rule 901(b) outlines various methods for establishing authenticity, including testimony from a knowledgeable witness, circumstantial evidence, and descriptions of systems or processes that reliably produce accurate results (the last typically used in the absence of witness testimony corroborating what the video shows, such as for a store security camera that recorded an overnight burglary).[7] These standards are intentionally flexible, designed to accommodate a wide range of electronic evidence formats.

However, as generative AI tools become more sophisticated and widely accessible, the reliability of these traditional methods of authentication will increasingly be called into question. And even if a video is shown by proper means to be authentic, a jury may not believe it, no matter what the judge and lawyers say. The ability to fabricate convincing images undermines the evidentiary weight of digital visuals even when they meet the formal requirements of Rule 901. This tension sits at the heart of the current debate over how courts should evaluate digital evidence in the age of AI.

Further complicating matters, Article X of the Federal Rules of Evidence, which governs the contents of writings, recordings, and photographs, provides broad definitions and standards that were designed for more traditional digital formats. Federal Rule of Evidence 1001(a) and (b) define writings and recordings to include information set down or recorded in any form or manner, including electronically. Rule 1001(d) provides that, for electronically stored information, any printout or other output readable by sight qualifies as an "original" if it accurately reflects the information. Rule 1001(e) further provides that a duplicate is a counterpart produced by a mechanical, electronic, or other equivalent process that accurately reproduces the original. Under Rule 1003, such duplicates are generally admissible unless there is a genuine question about the authenticity of the original or admitting the duplicate would be unfair.

These provisions were crafted to accommodate the digitization of information and its use as evidence, including photographs stored and reproduced electronically. Now, however, the underlying assumption that a digital image accurately reflects reality is no longer guaranteed. The legal framework, while flexible, was not built to anticipate the ease with which synthetic images can be created and passed off as authentic. As a result, courts must grapple with the possibility that even images meeting the formal criteria for admissibility may be fundamentally unreliable, raising urgent questions about how to establish and preserve evidentiary integrity. A proposal to amend these rules is currently under consideration and is discussed in greater detail below.

The problem of fake evidence is not new. Courts have dealt with allegations of fabricated evidence, such as forged documents, for centuries. In recent decades, those allegations have expanded to include photographs and video digitally edited with programs such as Adobe Photoshop. When digital photography first emerged as a replacement for traditional film, courts and attorneys were forced to confront new questions about authenticity, manipulation, and evidentiary reliability. Unlike film negatives, which offered a physical and relatively tamper-resistant record, digital images could be altered with relative ease, raising concerns about whether they could be trusted in court. Yet despite these early doubts, digital photography quickly became the norm, legal standards adapted accordingly, and courts worked out how to adjudicate allegations that evidence was fake, often with the help of forensic experts.[8]

Around the same time that courts were adapting to the rise of digital evidence in the early 2000s, the United States faced a parallel erosion of public trust in the reliability of digital records in another high-stakes arena: presidential voting. The 2000 presidential election exposed deep concerns about the accuracy and transparency of voting systems, particularly in contrast to traditional paper ballots. The resulting public distrust of digital voting mechanisms threatened the integrity of the democratic process.

In response, many jurisdictions adopted hybrid systems that paired digital voting machines with paper backups, since physical records could be audited and recounted if disputes arose.[9] The security of paper-based systems relies on a verifiable chain of custody, including secure storage, careful transport, and human oversight. Ultimately, it was the paper trail that provided the necessary assurance of integrity, reinforcing the idea that digital systems, while efficient, must be anchored by verifiable originals. Today, most polling places include a paper component, in part to assure public trust in the security and authenticity of the vote, and many people feel more confident marking and submitting a physical ballot.

The shift from purely paper voting to hybrid electronic systems mirrors the legal system's current struggle with AI-generated image evidence. Just as digital voting required a tangible safeguard to maintain public trust, courts today must find ways to validate digital photographs and videos in an era when manipulation is not only possible but increasingly undetectable. For its part, Google has developed an AI “watermark” that is embedded in all of its AI-generated content and is (according to Google) difficult to tamper with.[10] But unless every provider of generative AI services employed similarly tamper-proof watermarks, the absence of an AI watermark would not suffice to prove that an image or video was not generated by AI. The lesson is clear: digital convenience must be balanced with evidentiary rigor. Whether in elections or litigation, the credibility of digital records depends on the ability to trace, verify, and, when necessary, revert to a trusted source.
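To make the "trace and verify" idea concrete, the sketch below illustrates a complementary approach to provenance: rather than embedding a watermark in the pixels themselves (as Google's system does), a trusted capture device signs a cryptographic fingerprint of the footage at the moment of recording, so that any later alteration fails verification. This is a minimal, hypothetical illustration only, not a description of Google's watermark or of any deployed system; the key, function names, and sample bytes are all invented for the example.

```python
import hashlib
import hmac

# Hypothetical sketch of "signed provenance" for video evidence: a trusted
# recorder (e.g., a body camera) signs the footage at capture time with a
# key it alone holds. Anyone who can verify the tag can later confirm that
# the bytes were not altered. The key and data below are invented.

CAPTURE_KEY = b"secret-key-held-by-the-trusted-recorder"

def sign_footage(video_bytes: bytes) -> str:
    """Return a tamper-evident tag: an HMAC-SHA256 over the raw video bytes."""
    return hmac.new(CAPTURE_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_footage(video_bytes: bytes, tag: str) -> bool:
    """Check the footage against the tag issued at capture time."""
    expected = sign_footage(video_bytes)
    # compare_digest avoids timing side channels when comparing secret values
    return hmac.compare_digest(expected, tag)

original = b"\x00\x01...raw video bytes as captured..."
tag = sign_footage(original)

print(verify_footage(original, tag))            # True: untouched footage verifies
print(verify_footage(original + b"\xff", tag))  # False: any edit breaks the tag
```

Note that this sketch shares the core limitation of watermarking discussed above: it can expose tampering with footage that carries a provenance record, but it cannot, by itself, prove that a video lacking any such record is genuine, which is precisely the gap the courts must confront.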

The Federal Rules of Evidence, including Rule 901 and Article X, have been interpreted to accommodate digital formats, recognizing that images stored electronically can still be authenticated through witness testimony, metadata, and system-generated records. See, e.g., State v. Hayden, 90 Wash. App. 100, 950 P.2d 1024 (1998) (holding that digitally enhanced images of latent fingerprints and palm prints were admissible under the Frye standard, as there was no substantial disagreement among qualified experts regarding the reliability of the enhancement techniques or the software used by trained professionals). The Hayden court explained that since "there does not appear to be a significant dispute among qualified experts as to the validity of enhanced digital imaging performed by qualified experts using appropriate software," it was satisfied that "the process is generally accepted in the relevant scientific community." Id. at 1028. Still, where there is a question over whether a piece of evidence is genuine, the task of determining what weight to give that evidence most often falls to the finder of fact; it is simply one more item in the evidentiary stew that jurors must consider in reaching a verdict.

AI-generated media presents a deeper challenge than previous generations of dubious evidence, however: commentators fear that, in time, experts will not be able to distinguish AI-generated images from genuine ones even with computer-based tools. With no expert testimony to rely on, courts and jurors are left with little guidance beyond their own eyes (which, as shown above, can deceive them). If a video authentically portrays events that a person can testify to, the existing admissibility standards should be sufficient: if a witness comes into court and says that the video shows what they saw, or that it was recorded using a reliable method, that video should come into evidence (assuming it is otherwise admissible).

But where no one can testify to the authenticity of a video, lawyers could be faced with having to prove facts the old-fashioned way, through eyewitness testimony, without reliance (or over-reliance) on video evidence, because judges and jurors no longer trust video the way they used to. The only method available to fill the credibility gap, therefore, may be conventional trial advocacy: telling a compelling story and effectively examining witnesses to enhance (or detract from) the credibility of the documentary or pictorial evidence the jury is also presented with. If a story makes sense, a video backing it up will reinforce it. But if a story is held together only by dubious reasoning and dodgy witnesses, a video corroborating it may not be enough. The advent of generative AI media is, therefore, a “Back to the Future” moment for trial lawyers: it may take away the crutch that video evidence has become, and it may place a premium on lawyers who are effective storytellers and advocates rather than play-by-play announcers narrating a video.

Recognizing these challenges, the US Judicial Conference’s Advisory Committee on Evidence Rules (the “Advisory Committee”) has considered two paths for updating the Federal Rules of Evidence. The first would amend Rule 901 to establish a specialized authentication process for suspected deepfakes. The second, which the Committee prefers, would introduce a new rule, Rule 707, governing machine-generated evidence by applying expert-witness reliability standards to it. Ultimately, the Advisory Committee chose not to amend Rule 901, with several members favoring a “wait-and-see” approach.[11]

Under Rule 707, AI-generated and other machine-generated evidence offered at trial without an accompanying expert witness would be subject to the same reliability standards that apply to expert testimony. Such evidence could be admitted only if it: (1) would assist the trier of fact; (2) is based on sufficient facts or data; (3) is the product of reliable principles and methods; and (4) reflects a reliable application of those principles and methods to the facts.[12] Rule 707 thus creates a framework for opposing parties to challenge the reliability of AI-generated evidence by probing how the producing system operated and how its methods were applied to the specific facts of the case.[13] Notably, a Committee Note clarifies that, for purposes of Rule 707, machine learning means “an application of artificial intelligence that is characterized by providing systems the ability to automatically learn and improve on the basis of data or experience, without being explicitly programmed.”[14]

In May 2025, the Advisory Committee voted 8–1 in favor of seeking public comment on the proposed Rule 707.[15] In August, the Committee on Rules of Practice and Procedure of the Judicial Conference of the United States released Rule 707 for public comment, with the comment period open until February 16, 2026.[16] Critics caution that Rule 707 applies only to evidence that the proponent acknowledges was created by AI, not to evidence whose authenticity is in dispute; it therefore does little to help courts screen out deepfakes or other falsified evidence when authenticity is contested.[17] Nevertheless, Rule 707 marks an important first step at the federal level toward adapting the rules of evidence to the increasing use of AI-generated materials in court.

If Rule 707 is ultimately adopted, its practical impact could be significant. Certain cases may see an increase in pretrial motions, expert testimony, and evidentiary challenges, driving up both the complexity and the cost of introducing AI-generated evidence. Lawyers will need to develop new strategies for authenticating digital evidence and countering AI-related objections, while judges will face the task of applying expert-witness standards to technologies that evolve rapidly. In short, Rule 707 may not resolve every issue posed by generative AI, but it signals a shift toward a future in which courts must balance technological innovation with the fundamental need for reliable evidence.

 


[4] See Scott v. Harris, 550 U.S. 372, 380-81 (2007) (holding that where testimony of a party opposing summary judgment was contradicted by video evidence, the court could use the video to grant summary judgment to the movant: “Respondent's version of events is so utterly discredited by the record that no reasonable jury could have believed him. The Court of Appeals should not have relied on such visible fiction; it should have viewed the facts in the light depicted by the videotape”).

[6] Fed. R. Evid. 901(a).

[7] Fed. R. Evid. 901(b). These methods are also the majority rule in state courts. See, e.g., Mooney v. State, 487 Md. 701, 705-06, 321 A.3d 91 (2024).

[8] United States v. Chapman, 804 F.3d 895, 899-901 (7th Cir. 2015) (discussing expert testimony regarding the authenticity of a digitally recorded video where the video’s authenticity was challenged by a criminal defendant).

[9] Avesta Hojjati, From Paper to Post: The Most Secure Ways to Vote (“The road ahead: Creating a secure and accessible future for voting”), DigiCert Blog (May 30, 2024), https://www.digicert.com/blog/what-is-the-most-secure-voting-method.

[11] Advisory Committee on Evidence Rules, Agenda Book, May 2, 2025 Meeting, U.S. Courts (May 2, 2025) at 77, https://www.uscourts.gov/sites/default/files/2025-04/2025-05_evidence_rules_committee_agenda_book_final.pdf.

[12] Proposed Fed. R. Evid. 707 on Artificial Intelligence–Generated Evidence, Nat’l L. Rev. 3 (August 21, 2025), https://natlawreview.com/article/new-evidence-rule-707-would-set-standards-ai-generated-courtroom-evidence.

[13] Id.

[14] Advisory Committee on Evidence Rules, Agenda Book, May 2, 2025 Meeting, U.S. Courts (May 2, 2025) at 199, https://www.uscourts.gov/sites/default/files/2025-04/2025-05_evidence_rules_committee_agenda_book_final.pdf.

[15] Proposed Fed. R. Evid. 707 on Artificial Intelligence–Generated Evidence, Nat’l L. Rev. 3 (August 21, 2025), https://natlawreview.com/article/new-evidence-rule-707-would-set-standards-ai-generated-courtroom-evidence.

[16] Id.

[17] New AI Evidence Rule Is a Good Start, But More Is Needed, Law360 (August 27, 2025), https://www.law360.com/pulse/articles/2381199/new-ai-evidence-rule-is-a-good-start-but-more-is-needed.


LEGAL NOTICE: This publication is provided for informational purposes only. It is not intended to constitute, and shall not be construed as, the rendering of legal advice or professional services of any kind, nor does it create an attorney-client relationship between Clyde & Co US LLP and the recipient. Nothing herein constitutes the endorsement of any particular case, principle, or proposition. Moreover, in each instance, determination of issues pertaining to insurance bad faith requires an analysis of the relevant facts and circumstances, pleadings, policy language, and law of the involved jurisdiction(s). The contents of this publication neither constitute nor should they be viewed as a substitute for the advice and recommendations of qualified retained counsel.
