Publication Date

10-2025

Document Type

Article

First Page

933

Abstract

Artificial intelligence (AI) violates procedural due process rights if the government uses it to deprive people of life, liberty, or property without adequate notice or an opportunity to be heard. A wide range of government agencies deploy AI systems, including in courts, law enforcement, public benefits administration, and national security. If the government refuses to disclose the reasons why it denied a person bail, public benefits, or immigration status, serious due process concerns arise. If the government delegates such tasks to an AI system, the due process analysis does not change: one asks whether a person received adequate notice and an opportunity to be heard. And, where applicable, one must ask whether the risk of error and the costs to rights justify not using interpretable and adequately tested AI.

Nor is it necessary for AI or other automated systems to operate in a “black box” manner without providing people with notice or a way to meaningfully contest decisions. There is a ready alternative: “glass box” or interpretable AI systems present results so that users know what factors the system relied on, what weight the system gave to each, and the strengths and limitations of the associations or predictions made. Whether it is a criminal investigation or a public benefits eligibility determination, interpretable AI can ensure that people have notice and can challenge any error, using the procedures available. And such a system can be more readily checked for errors. Due process demands a more robust opportunity to contest government decisions that raise greater reliability concerns. We need to know how reliably an AI system performs under realistic conditions to assess the risk of error.

Longstanding due process protections and well-developed interpretable AI approaches can ensure that AI systems safeguard due process rights. Conversely, due process rights have little meaning if the government uses “black box” systems that are not fully interpretable or tested for reliability and, as a result, cannot comply with procedural due process requirements. So far, there has been little government self-regulation of AI. In response, judges have begun to enforce existing due process rights when AI or other automated decision-making processes are used. As judges address due process challenges to AI, they should consider the interpretability and reliability of AI systems. Similarly, as lawmakers and regulators examine the government’s use of AI systems, they should ensure safeguards, including interpretability and reliability, to protect our due process rights in an increasingly AI-dominated world.
