Publication Date: 10-2025
Document Type: Article
First Page: 977
Abstract
The increasing deployment of generative artificial intelligence (AI) in law enforcement raises pressing theoretical and normative questions, particularly regarding its potential to infect a critical stage of a police investigation: the custodial interrogation. A central issue is whether, and to what extent, police may lawfully employ generative AI tools to induce confessions from suspects during custodial questioning.
Historically, courts have permitted law enforcement officers to use certain deceptive tactics during interrogations without running afoul of constitutional protections. Permissible forms of deception have included the presentation of falsified forensic evidence, fabricated witness statements, and sham polygraph results. These methods, though controversial, have long been accepted as part of the strategic repertoire of law enforcement, on the premise that they do not overbear the suspect’s will to a constitutionally impermissible degree.
However, the advent of generative AI introduces new dimensions of manipulation that may significantly intensify the inherent compulsion of a custodial interrogation. AI-generated "deepfakes"—synthetic audio, video, or images that appear convincingly authentic—can be used to fabricate incriminating evidence with unprecedented realism. Imagine a scenario in which police confront a suspect with a deepfaked video depicting a purported eyewitness who falsely claims to have observed the suspect committing the crime, or a video of an “accomplice” confessing and implicating the suspect. Such fabrications, made possible by AI, could profoundly distort a suspect’s perception of the evidence against them, potentially exceeding the bounds of due process and exacerbating the potential for false confessions.
Beyond the presentation of false evidence, generative AI may also enable real-time enhancements to the interrogation itself. Advanced models may soon analyze inputs such as a suspect’s facial expressions, vocal tone, posture, and demographic characteristics, and generate immediate feedback or strategy suggestions for interrogators. These AI-driven analytics could guide law enforcement toward more psychologically effective, and possibly more coercive, techniques tailored to the individual suspect. The potential use of such data-driven, adaptive interrogation strategies raises serious concerns under established voluntariness jurisprudence.
To address the unprecedented risks posed by generative AI in the interrogation context, courts should adopt a clear and administrable per se rule: that the use of fabricated evidence generated by AI to elicit a confession renders that confession involuntary. As generative AI enables increasingly persuasive and deceptive forms of evidence fabrication, a categorical prohibition on its use in custodial interrogations is both doctrinally sound and normatively imperative. This approach would preserve the integrity of the criminal justice system and ensure that constitutional protections remain robust in the face of rapidly advancing digital manipulation.
Repository Citation
Hillary B. Farber & Anoo D. Vyas, Truth and Technology: Deepfakes in Law Enforcement Interrogations, 27 U. Pa. J. Const. L. 977 (2025).
Available at: https://scholarship.law.upenn.edu/jcl/vol27/iss5/2