
Toward International Cooperation on Foundational AI Models: An Expanded Role for Trade Agreements and International Economic Policy

Published on May 31, 2024

Abstract

Foundational AI models such as ChatGPT4 will have wide-ranging and potentially large-scale impacts on economic growth, social flourishing, and national security. At the same time, these models present risks. International AI governance is needed to support domestic regulation of foundational AI models, to address cross-border spillovers, and to develop the standards and compliance mechanisms that can build trust in the global dissemination and use of AI. The World Trade Organization (WTO) includes limited rules that matter for AI, but it is in various bilateral and regional trade agreements that rules for AI are most advanced. That said, the use of trade agreements to develop AI governance is in its infancy. This article outlines what more can be done in trade agreements, as well as in less formal international economic forums such as the G7, to develop international cooperation and governance of foundational AI systems.

Keywords: AI trade, LLMs, regulation, international risk


1. Foundational AI Presents New Opportunities for Social and Economic Flourishing, but Also Risks of Harm

The development of artificial intelligence (AI) presents significant opportunities for economic and social flourishing. The release of foundational models such as the large language model (LLM) ChatGPT4 in early 2023 captured the world’s attention, heralding a transformation in our approach to work, communication, scientific research, and diplomacy. According to Goldman Sachs, LLMs could raise global GDP by 7% and lift productivity growth by 1.5 percentage points over 10 years. McKinsey found that generative AI such as ChatGPT4 could add $2.6–$4.4 trillion annually across more than 60 use cases spanning customer operations, marketing, sales, software engineering, and R&D (McKinsey Digital, 2023). AI is also affecting international trade in various ways, and LLMs reinforce this trend. The upsides of AI are significant, and achieving them will require developing responsible and trustworthy AI. At the same time, it is critical to address the potential risk of harm not only from conventional AI but also from foundational AI models, which in many cases can either magnify existing AI risks or introduce new ones.

For example, LLMs are trained on data that encodes existing social norms, with all their biases and discrimination. LLMs also create information hazards by providing information that is true but can be used to harm others, such as how to build a bomb or commit fraud (Bostrom, 2011). A related challenge is preventing LLMs from revealing personal information about individuals, which poses a risk to privacy. In other cases, LLMs will increase existing risks of harm, such as misinformation, which is already a problem on online platforms, or increase the incidence and effectiveness of crime. LLMs may also introduce new risks, such as risks of exclusion where LLMs are unavailable in some languages.

2. International Cooperation on AI Is Already Happening in Trade Agreements and International Economic Forums

Many governments are either regulating AI or planning to do so, and the pace of regulation has increased since the release of ChatGPT4. However, regulating AI to maximize the upsides and minimize the risks of harm without stifling innovation will be challenging, particularly for a rapidly evolving technology that is still in its relative infancy. Making AI work for economies and societies will require getting AI governance right. Deeper and more extensive forms of international cooperation can support domestic AI governance efforts in a number of ways. This includes facilitating the exchange of AI governance experiences, which can inform approaches to domestic AI governance; addressing externalities and extraterritorial impacts of domestic AI governance, which can otherwise stifle innovation and reduce opportunities for uptake and use of AI; and finding ways to broaden access globally to the computing power and data needed to develop and train AI models.

Free trade agreements (FTAs) and, more recently, digital economy agreements (DEAs) already include commitments that increase access to AI and bolster its governance. These include commitments to allow cross-border data flows, to avoid data localization requirements, and to not require access to source code as a condition of market access, all subject to exception provisions that give governments the policy space to also pursue other legitimate regulatory goals such as consumer protection and guarding privacy. Some FTAs and DEAs, such as the New Zealand–U.K. FTA and the Digital Economy Partnership Agreement among Singapore, Chile, and New Zealand, include AI-specific commitments focused on developing cooperation and alignment, including in areas such as AI standards and mutual recognition agreements.

With AI a focus of discussions, international economic forums are important for developing international cooperation on AI. These include the G7, the U.S.–EU Trade and Technology Council (TTC), and the Organisation for Economic Co-operation and Development (OECD), as well as the Forum for Cooperation on Artificial Intelligence (FCAI), a dialogue among government, industry, and civil society jointly led by Brookings and the Centre for European Policy Studies. Initiatives to establish international AI standards in global standards development organizations (SDOs) such as the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) are also pivotal in developing international cooperation on AI.

3. But More Is Needed—Where New Trade Commitments Can Support AI Governance

These developments in FTAs, DEAs, and international economic forums, while an important foundation, need to go further to fully address the opportunities and risks from foundational AI models such as LLMs. International economic policy for foundational AI models can use commitments in FTAs and DEAs and outcomes from international economic forums such as the G7 and the TTC as mutually reinforcing opportunities for developing international cooperation on AI governance. This can happen as FTAs and DEAs elevate the output from AI-focused forums and standard-setting bodies into trade commitments and develop new commitments as well. FCAI is another forum in which to explore cutting-edge AI issues.

Table 1 outlines key opportunities and risks from foundational AI models and how an ambitious trade policy can further develop new commitments that would help expand the opportunities of foundational AI models globally and support efforts to address AI risks, including by building on developments in forums such as the G7 and in global SDOs.

Table 1. New commitments for FTAs and DEAs, and for discussion in international economic forums.

Enable artificial intelligence (AI) opportunity

Increase access to AI compute and data

  • Reduce barriers to hardware, data, and access to cloud computing.

Increase access to AI products and services

  • Reduce barriers to AI services and AI-enabled goods.

Support opportunities to develop and use AI globally

  • Commit to a dialogue and work program that identifies opportunities to cooperate on expanding AI access and use in other countries.

Manage AI risks

Discrimination, exclusion, and toxicity

  • Agree to implement appropriate privacy regulations.

  • Commit to internationally recognized AI ethical principles.

  • Develop government procurement commitments to drive responsible and trustworthy AI.

  • Agree to develop mutual recognition agreements related to conformity assessment and AI audits.

  • Include the G7 International Code of Conduct for Advanced AI Systems in trade agreements.

  • Commit to cooperate in developing international AI standards.

  • Include a WTO Technical Barriers to Trade (TBT)-style commitment to base domestic regulation on international AI standards.

  • Agree to share best practices around data governance.

Security and privacy

  • Develop government procurement commitments to drive responsible and trustworthy AI.

  • Include the G7 Code of Conduct for AI in trade agreements.

  • Commit to cooperate in developing international AI standards.

  • Develop a TBT-style commitment to base domestic regulation on international AI standards.

  • Include as a trade commitment the OECD principles on government access to personal data.

  • Agree to share best practices around AI governance.

Misinformation

  • Identify opportunities to expand cooperation on misinformation/disinformation.

  • Include the G7 Code of Conduct for AI in trade agreements.

Explainable and interpretable results

  • Commit to cooperate on the development of international AI standards.

  • Develop a TBT-style commitment to base domestic regulation on international AI standards.

  • Agree to develop mutual recognition agreements related to conformity assessment and AI audits.

  • Cooperate on the development of technical solutions.

  • Agree to share best practices around AI governance.

Measuring AI risk and accountability

  • Develop a WTO Sanitary and Phytosanitary (SPS)-style commitment to base AI regulation on a risk assessment.

  • Commit to cooperate in the development of international AI standards.

  • Develop a TBT-style commitment to base domestic regulation on international AI standards.

  • Include the NIST AI Risk Management Framework as a trade commitment.

  • Agree to share experience on AI governance.

  • Include the G7 Code of Conduct for AI in trade agreements.

Copyright infringement

  • Agree to share developments in domestic laws and evolving approaches to foundational AI and copyright.

Note. FTA = free trade agreement; DEA = digital economy agreement.

4. Conclusion

Regulating foundational AI models to maximize the opportunities for human flourishing must at the same time address AI risk. First, as foundational AI models evolve and become more powerful, assessments of AI opportunities and risks will need to keep pace; this dynamic underscores the importance of regulation being nimble and adaptive.

Second, the focus on trade agreements as an opportunity for building international cooperation on AI should not be seen as solely aimed at increasing market access for AI. Many of the proposed ways that trade agreements can build international cooperation on AI focus on making country-level AI regulation more effective. From this perspective, international AI cooperation in trade agreements should be seen as enabling domestic AI regulation.

A related issue when building international cooperation on AI is that understanding what constitutes acceptable AI risk requires baselines against which assessments can be made. For instance, should risks from AI-powered driverless cars be assessed against current risks from driving or against a higher standard? Different countries will answer this question in different ways, leading to different regulatory outcomes.

This leads to a final point: there is a range of opportunities and risks from foundational AI not explicitly addressed in this article, such as the impact of AI on sustainable development and what AI will mean for national security. These risks are not discussed here because they, along with other AI risks, are best addressed in forums other than trade agreements.


Disclosure Statement

Joshua P. Meltzer has no financial or non-financial disclosures to share for this article.


References

Bostrom, N. (2011). Information hazards: A typology of potential harms from knowledge. Review of Contemporary Philosophy, 10, 44–79.

McKinsey Digital. (2023, June 14). The economic potential of generative AI: The next productivity frontier. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction


©2024 Joshua P. Meltzer. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
