
AI Safety Is a Narrative Problem

Published on Jun 05, 2024

Abstract

This op-ed explores power and narrative dynamics around AI. Drawing on pop-culture references, the professional experiences of the author, and examples from 2023’s “Great AI Safety Hype Roadshow,” it uses the literary criticism technique of practical criticism to consider speeches and announcements from both Silicon Valley executives and research scientists, interrogating the media-friendly nature of p(doom) discourse—which focuses on the existential risks of AI (PauseAI, 2023)—and its likely consequences. The complexities of AI and its numerous social impacts can be difficult for even the most expert analyst to unpack. In spite of this, the prospect of “existential threats” has successfully cut through to become a mainstay of mainstream media coverage over the last year. This piece will make the case that this is an effective narrative conceit that has achieved a number of ends that traditional science communication tends to find difficult, if not impossible, to achieve. Firstly, it is easy to understand. Simplification of this nature—removing jargon and complexity and focusing on a single outcome—is much easier to fit on a TV rolling news ticker or on the cover of a tabloid newspaper than more balanced, representative opinions. Secondly, it inherits prior assumptions from well-known dramatic forms. P(doom) plays to stories familiar from Greek tragedy through to Marvel movies, in which lone male heroes battle ineluctable forces. Thirdly, it is imbued with urgency and so becomes difficult to ignore.

Keywords: technology, power, AI, narrative, media literacy


In May 2023 I was invited to be in the audience for a talk given by Geoffrey Hinton, the computer scientist known as one of the “godfathers of AI.” Afterwards I went along to a (mostly) academic dinner. In the slightly awkward period at the beginning when we were milling around, I started chatting to a very eminent computer scientist who had retired some years ago, having mostly—I gather—worked in academic labs.

By way of making conversation, the Very Eminent Computer Scientist asked me what I “do.” This is an oddly complicated question at the best of times because I’m a chronic overthinker, I don’t have a convenient Richard Scarry job title, and I do a few different things. I’m also an anomaly because I’ve been working in technology for almost 30 years without ever doing the sorts of things that generally equate to status: I’m not an academic, I’ve never worked at Google, and I’m a very long way from being fantastically rich. However, when people make polite conversation before dinner, they don’t want a whole spiel about how, actually, you work in an emerging field in which job titles are not yet fully formed and your practice is angled toward an inclusive feminist vision of digital technologies because, well, life is short and people are hungry. So, instead, I said, “I help people understand how technologies work in the world” and dropped the vague ‘tech ethics’ catchall, and he replied, “How interesting, and, may I ask, what qualifies you to do that?”

Being asked by a professor “what qualifies you to do that?” while standing in the Senior Common Room of a Cambridge college is quite daunting. In response, I emitted a vaguely incoherent freefall of word association—the kind you kick yourself about while replaying a slew of potential pithy aperçus. I explained that I’d spent 20 years making and commissioning digital products and services, some of them used by millions of people, and so my practice was based on observing what happens to a technology when it goes into the world: how it’s adapted and changed, and how every technology is really unfinished until it’s used by people. Mercifully, at that moment, we were ushered into dinner, but “what qualifies you to do that” stuck with me, and I wished I’d had a better answer.

The question of ‘what qualifies you’ to understand a technology has become particularly relevant over the past year, as Sam Altman’s AI Hype Roadshow has rolled through both social and traditional media, accompanied by a cavalcade of AI doomspeak from World-Leading Authorities. This has turned the term ‘AI’ into a compelling vehicle for a range of as-yet imaginary concepts, all rooted in the perceived—and perhaps mythical—potential of artificial general intelligence (AGI).

This is despite the fact that there is no consensus in the AI community as to whether AGI can or will be achieved; there is, however, a cadre of investors, CEOs, and technologists who purport to believe its achievement is simultaneously inevitable and extremely dangerous. Confusingly, some of the most pessimistic commentators have themselves been involved in the push to create AGI—and in spring 2023, after the launch of GPT-4, they also became quite keen on open letters.

The first open letter was from the Future of Life Institute (2023), asking for a pause on AI development due to the potential emergence of risks including loss of work, human replacement, and “loss of control of civilization.” Altman and OpenAI colleagues (Altman et al., 2023) did not pause, however, and instead rebranded AGI as “superintelligence,” a marketing term presumably meant to throw shade on the ‘normal intelligence’ found in humans, and in a short blog post they outlined the need for global governance to contain the might of these as-yet undeveloped technologies. This anxiety was escalated a few days later when an even shorter statement was published by the Center for AI Safety (2023) declaring “the risk of extinction from AI should be a global priority.” Over the following months, the list of potential existential risks was expanded to include threats to biosecurity and cyberattacks, but few if any of these scenarios were worked through with explanations of how they might come to pass or justifications as to why their creators would make things they could not control. Instead, they were presented as sci-fi risks, with their creators cast in the kind of powerful-yet-helpless light familiar from Greek tragedy.

As such, the ability to understand AI, or indeed any technology, is not essential to interrogate this chain of events, which has been an exercise in narrative framing and control. The project of Altman and his merry band of doomsayers appears to be to capture power and create obfuscation by making new myths and legends—equating the role of computer scientists and technology executives with Superman, weeping over Lois Lane before he turns back the world, or Oedipus, distraught after murdering his father. If there has been a teachable moment, then the lesson has not been one about the potential of technologies but about the importance of media literacy.

This is by no means a new move. It just happens—this time—to have been astonishingly effective. For several decades, tech companies have been aware that political influence is as important as technological innovation in shaping future market opportunities. From tactical advertising to political lobbying to well-paid public policy jobs that have improved the bank balances of many former politicians and political advisers, the importance of landing compelling political stories has played a critical role in creating, expanding, and maintaining these incredibly lucrative markets.

The current ‘existential threat’ framing is effective because it fits on a rolling news ticker, diverts attention from the harms being created right now by data-driven and automated technologies, and confers huge and unknowable potential power on those involved in creating those technologies. If these technologies are unworldly, godlike, and unknowable, then the people who created them must be more than gods, their quasi-divinity transporting them into state rooms and onto newspaper front pages without needing to offer so much as a single piece of compelling evidence for their astonishing claims. This grandiosity makes the hubris of the first page of Stewart Brand’s Whole Earth Catalog seem rather tame, and it assumes that no one will pull back the curtain and expose it as a market-expansion strategy rather than a moment of redemption. No one will ask what the words really mean, because they don’t want to look like they don’t really understand.

And yet, in reality, all of this myth-making and rhetorical bluster is just a narrative trick: the hidden object is not a technology, but a bid for power. This is a plot twist familiar from Greek myths, cautionary tales, and superhero stories, and it’s extremely compelling for journalists because most technology news is boring as hell. Altman’s current line is roughly, ‘please regulate me now because I’m not responsible for how powerful I’m going to turn out to be—and, oh, let’s just skip over all the current copyright abuses and potentially lethal misinformation because that’s obvs small fry compared to when I accidentally abolish humanity.’ If it reminds me of anything, it’s the cartoon villain Dr. Heinz Doofenshmirtz from Phineas and Ferb, who makes regular outlandish claims before trying, and failing, to take control of the Tri-State Area. The difference is, of course, that Phineas and Ferb always frustrate his plan.

My point is not so much that we need Phineas and Ferb to come and sort this all out, but that we need to stop normalizing credulity when people with power and money and fancy titles say extraordinary things. When I went to Hinton’s Q&A in Cambridge this past summer, he spoke with ease and expertise about neural nets, but admitted he knows little about politics or regulation or people beyond computer labs. These last points garnered several laughs from the audience, but they weren’t really funny; they spoke to a yawning gap in the way that technology is understood, spoken about, and covered in the media.

Computer science is a complex discipline, and those who excel at it are rightly lauded, but so is understanding and critiquing power and holding it to account. Understanding technologies requires also understanding power; it requires social, cultural, political, and media literacy as well as technical literacy; incisive questioning and sober critique as well as shock and awe. And while algorithmic audits are important for technological transparency, narrative analysis is vital for understanding power—particularly when there is such a striking deficit between actions and words.

Unusually for someone who occasionally attends computer science dinners at a Cambridge college, my undergraduate degree is in English literature. Not far from where I heard Hinton lecture, I had once spent many hours engaged in the form of close reading known as practical criticism. ‘Prac crit’ involves analyzing the form and content of anonymous, undated texts, identifying any formal and stylistic techniques, and situating the extract in a possible historical context and literary tradition.

Every student of English literature knows that the Western canon is full of the words of grandiose men staking claims to power and importance, and that the dramatic fate of gods upon the earth is a theme that both tragedians and Hollywood screenwriters have returned to again and again. For all the talk of innovation, the rhetorical tropes of Altman and Elon Musk have existed since the time men were making speeches in the agora and actors were performing in the Theatre of Dionysus. The hubris of modern technology CEOs is the drama of our time, and it plays out in viral snippets and social media posts that travel out of context into headlines and memes, stoking our deepest fears.

After all, the ability to feel fear in a volatile and uncertain world is universal. The threats presented by the p(doom)ers (PauseAI, 2023) do not resonate because of any technology; they resonate because they speak to people’s deepest fears—what is left if we lose health, work, security, and control?

If there is an existential threat posed by OpenAI and other technology companies, it is the threat of a few individuals shaping markets and societies for their own benefit. Elite corporate capture is the real existential risk, but it looks much less exciting in a headline than the catastrophic takeoff of AI “superintelligence.”


Disclosure Statement

Rachel Coldicutt has no financial or non-financial disclosures to share for this article.


References

Altman, S., Brockman, G., & Sutskever, I. (2023, May 22). Governance of superintelligence. OpenAI. https://openai.com/blog/governance-of-superintelligence

Center for AI Safety. (2023, May 28). Statement on AI risk. https://www.safe.ai/statement-on-ai-risk

Future of Life Institute. (2023, March 22). Pause giant AI experiments: An open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

PauseAI. (2023, December 18). List of p(doom) values. https://pauseai.info/pdoom


©2024 Rachel Coldicutt. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
