
Government Interventions to Avert Future Catastrophic AI Risks

Published on Apr 15, 2024


This essay is a revised transcription of Yoshua Bengio's July 2023 testimony before the US Senate Subcommittee on Privacy, Technology, and the Law at its hearing on the oversight of AI. It argues for caution and for government intervention, through regulation and research investment, to mitigate the potentially catastrophic outcomes of future advances in AI as the technology approaches human-level cognitive abilities. It summarizes trends in advancing capabilities and the uncertain timeline to these future advances, as well as the types of catastrophic scenarios that could follow, including both intentional and unintentional cases: misuse by bad actors, and loss of control of powerful AI systems, whether deliberate or unintended. Its public policy recommendations include national regulation, international agreements, public research investment in AI safety, and classified research investment in designing aligned AI systems that can safely protect us from bad actors and from uncontrolled, dangerous AI systems. It highlights the need for strong democratic governance processes to control the safe and ethical use of future powerful AI systems, whether they are in private hands or under government authority.

Keywords: artificial intelligence, AI safety, AI alignment, AI regulation, AI public policy, AI countermeasures


©2024 Yoshua Bengio. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
