Document Type
Article
Publication Date
6-5-2024
Abstract
One of the most commonly recommended policy interventions for algorithms in general and artificial intelligence ("AI") systems in particular is greater transparency, often focused on disclosing the variables an algorithm employs and the weights given to those variables. This Essay argues that any meaningful transparency regime must provide information on other critical dimensions as well. For example, it must include key information about the data on which the algorithm was trained, including the data's source, scope, quality, and internal correlations, subject to constraints imposed by copyright, privacy, and cybersecurity law. Disclosures about prerelease testing also play a critical role in understanding an AI system's robustness and its susceptibility to specification gaming. Finally, because AI, like all complex systems, tends to exhibit emergent phenomena, such as proxy discrimination, interactions among multiple agents, the impact of adverse environments, and the well-known tendency of generative AI to hallucinate, ongoing post-release evaluation is a critical component of any system of AI transparency.
Keywords
artificial intelligence, AI system, policies, algorithms, algorithmic transparency, algorithmic disclosure, algorithmic discrimination, bias, training data
Publication Title
Columbia Science and Technology Law Review [online]
Repository Citation
Yoo, Christopher S., "Beyond Algorithmic Disclosure for AI" (2024). Articles. 426.
https://scholarship.law.upenn.edu/faculty_articles/426
DOI
https://doi.org/10.52214/stlr.v25i2.12766
Publication Citation
25 Colum. Sci. & Tech. L. Rev. 314 (2024)