The rise of generative AI confronts us with new and pressing questions about AI failure, and about how we make sense of it and learn to coexist with it. Whilst computer scientists understand AI failure as something that we can learn from and predict (Amodei et al., 2016; Yampolskiy, 2018, p. 142), in this article I argue that we need to understand AI failure as a complex social reality, one defined by the interconnection between our data, technological design, and structural inequalities (Benjamin, 2019; Broussard, 2023), by processes of commodification (Appadurai & Alexander, 2019), and by everyday political and social conflicts (Aradau & Blanke, 2021). Yet I also show that to make sense of the complexity of AI failure we need a theory of AI errors. Bringing philosophical approaches to error theory together with anthropological perspectives, I argue that a theory of error is essential because it sheds light on the fact that the failures in our systems derive from processes of erroneous knowledge production, from mischaracterisations and flawed cognitive relations. A theory of AI errors therefore ultimately confronts us with the question of what types of cognitive relations and judgements define our AI systems, and sheds light on their deep-seated limitations when it comes to making sense of our social worlds and human life.
Keywords: AI failure, AI hallucination, theory of errors, foundation models, anthropology
©2024 Veronica Barassi. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.