
Large Language Models: Trust and Regulation

An interview with Costanza Bosone, Bob Carpenter, Tarak Shah, and Claudia Shi by David Banks
Published on Jul 31, 2024

Abstract

Large language models have burst onto the scene and may do much to change the way the world operates. They can write, illustrate, and are rapidly adding new capabilities. This panel discussion brings together a group of experts to discuss the ways in which these tools might evolve, particularly in the context of how trustworthy they are and the issues surrounding their regulation.

Keywords: disinformation, GPT, human rights, job loss, OpenAI, trust



07/23/2024: This is the Just Accepted version of the article. This peer-reviewed version has been accepted for its content and is currently being copyedited to conform with HDSR’s style and formatting requirements.


©2024 David Banks, Costanza Bosone, Bob Carpenter, Tarak Shah, and Claudia Shi. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
