
‘Frontier AI,’ Power, and the Public Interest: Who Benefits, Who Decides?

Forthcoming. Now Available: Just Accepted Version.
Published on Jun 17, 2024


As the rapid industrialization of generative AI (GenAI) reached a crescendo in the fall of 2023, a series of international AI policy initiatives, such as the UK AI Safety Summit and the G7’s Hiroshima Process, emerged in response to the corresponding global AI governance challenges. The policymakers and government officials who drove these initiatives emphasized that the rise of ‘frontier AI’ technologies was bringing humankind to a historical inflection point—placing humanity at a crossroads where the choices of the present generation would determine whether AI innovation moves toward the exponential advancement of the public good or toward potentially irreparable harm to people, society, and the planet. And yet, despite the inflationary rhetoric of historical transformation around which these policy initiatives were framed, their actual results (high-level voluntary commitments, non-binding codes of conduct, the formation of light-touch national AI safety institutes, etc.) seem vastly out of sync with the scope and scale of the problem to which such initiatives claimed to respond.

In this paper, we argue that, if anything, this framing of the GenAI moment as a historical pivot point raises fundamental questions about the thorny relationship between ‘frontier AI,’ power, and the public interest, namely: Who actually has their hands on the wheel? Who defines the agenda of ‘frontier AI’ innovation? Who controls the means of producing it, and thus ultimately its influence on humanity’s broader fate? These questions cut much deeper than those surrounding the risks arising from unforeseen advances in ‘frontier AI’ capability, or those around pre-deployment safety testing, that took up much of the oxygen at the UK AI Safety Summit and related international AI policy discussions. They have to do with who possesses agenda-setting power, who decides when, where, and how these technologies are developed and deployed, and who stands to benefit from or be harmed by them. They also raise further questions about how affected members of society can harness control over the direction of AI to serve the common good and thereby exercise agency over the trajectories of their own collective futures. We claim that, ultimately, an effective response to these questions demands a fundamental rethinking of the broader political economy of AI and of the global innovation ecosystem that drives its forward progress—a rethinking that recasts this technology as a global public utility subject to democratic control, community-led agenda-setting, and society-centered regulation.

Keywords: generative AI, frontier models, AI ethics and governance, public interest, power, public utility

06/03/2024: The Just Accepted version of this article is available. This peer-reviewed version has been accepted for its content and is currently being copyedited to conform with HDSR’s style and formatting requirements.

©2024 David Leslie, Carolyn Ashurst, Natalia Menéndez González, Frances Griffiths, Smera Jayadeva, Mackenzie Jorgensen, Michael Katell, Shyam Krishna, Doschmund Kwiatkowski, Carolina Iglésias Martins, Sabeehah Mahomed, Carlos Mougan, Shachi Pandit, Mark Richey, Joseph W. Sakshaug, Shannon Vallor, and Luke Vilain. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
