This position paper pushes against policy panics engendered by generative AI technologies, arguing that regulation centering experiences from the Global South requires a more intersectional and inclusive approach.
Keywords: Global South, AI regulation, tech policy
The proliferation of generative AI has been seen through the lens of a policy, societal, and ethical emergency, verging on panic in some circles and presented either in apocalyptic terms or as an unmitigated economic boon. The material impact of these technologies on the lived experiences of individuals and communities, particularly those on the margins, is often obscured in a discourse I term ‘policy panic,’ the policy equivalent of moral panics, which have been popularly defined as periods during which ideas or groups of people are seen, often through exaggeration, as a threat to societal values and interests (Cohen, 2002).
The language of emergency tends to create exceptionalism around particular points in history and particular technologies, forcing us to think of these moments abstracted from their structural and historical context. The panicked policymaking stemming from this often seeks to control technology, rarely from the lens of redistributing power to the collective or the disenfranchised. Calls for regulation from within big tech, encapsulated in Sam Altman’s testimony to the U.S. Congress, are superficial responses to this panic, built on obfuscation and offering no detailed proposals (Kang, 2023). These policy discussions thus create logics of control concentrated in actors such as the state, the self-regulatory schemes of big tech, or highly specialized experts. AI policy discourse in particular lends itself to nonparticipatory policymaking, given the smokescreens that render AI processes opaque and unintelligible to the people and communities they impact.
AI regulation that seeks to engage with the technology and its implications is more often than not centered in the Global North, corresponding with the jurisdictions where AI is being designed, developed, and produced. The Global South, on the other hand, has been relegated to a passive role in both the production of AI and the production of its regulation. Additionally, knowledge production about AI’s social life and impact has largely ignored the Global South, often painting potential impacts in broad strokes.
This is not to say that strides are not being made in the Global South. A report by Initiate highlighted fragmentation in AI regulation and norm-setting efforts, tracking over 160 sets of AI ethics and governance principles across the world by late 2022, a number that has only increased since (Initiate, 2022). For instance, in South Asia, AI strategies and frameworks are being proposed or adopted (Information and Communication Technology Division, 2020; Ministry of Information Technology and Telecommunication, 2023; National Institute for Transforming India (NITI) Aayog, 2018). These strategies, however, shy away from imposing regulatory regimes, focusing instead on incentive structures to facilitate AI while adhering to vague ethical criteria. The primary impetus remains economic growth. Furthermore, while excellent research from Global South perspectives continues to center the disparate impacts of AI, its application to the actual design of the technology and its regulation remains scant.
The impact of synthetic and generative AI on the disinformation landscape in the Global South is worrisome given its ability to scale information production to unprecedented levels. AI-generated content could be seen as more reliable, especially in contexts where trust in media is low and state control is high. In its recent report, Deepfake It Till You Make It, Graphika documents pro-Chinese influence operations involving fictitious people created through generative AI (Graphika Team, 2023). In low-literacy, and particularly low digital-literacy, contexts, the spread of disinformation through AI-generated content is a legitimate worry, notwithstanding the skepticism regarding the panics highlighted above. While panic does not make for a conducive regulatory environment, it does us no good to dismiss these legitimate concerns.
In the Global South, there is a real possibility of AI optimism being adopted by governments as a quick fix for bad governance. Myth-making about efficiency and objectivity becomes an attractive solution for governance systems plagued by underresourcing and corruption. Shehla Rashid posits in her essay that large repositories of public data, often collected in the absence of data protection guardrails, are ripe for extraction for these models (Rashid, 2022). These interventions, however, rarely result in any improvement in the provision of welfare. There are layers of exclusion in the Global South context, where what public data is digitized, and thus usable by AI models, is often determined by a litany of factors, including class, geography, and identity. The extent to which regulation can truly address these intersecting structural problems remains debatable.
Turning to the business of regulation for those situated in the Global South, the actual producers of AI are often far removed from these contexts in terms of legal jurisdiction, which can make direct accountability for harms nearly impossible. While laws such as the Digital Services Act and the proposed EU AI Act can impose tangible obligations on tech companies and AI producers, similar legislation would be an exercise in futility in contexts where these companies have no incentive to make themselves accountable (Regulation 2022/2065). This is compounded by the lack of expertise within governance structures and by civil society actors working with limited resources and under constraints on their advocacy due to shrinking civic spaces.
As a result, AI regulation often replicates the flows of the technology itself: the Global South is rendered a mere recipient of legislation, passively adopting standards and norms determined elsewhere.
However, strides are being made to develop global norms and instruments that speak to these concerns. For instance, UNESCO’s “Recommendation on the Ethics of Artificial Intelligence” attempts global norm-setting regarding the AI lifecycle (UNESCO, 2022). Jun-E Tan points to the move toward AI constitutionalism, often envisioned as an assemblage of regulatory and norm-setting instruments around AI (Tan, 2020). Tan posits that this constitutionalism needs to reflect different realities, particularly those of the Global South. Arriving at such a constitutionalism in a participatory and inclusive manner, on a subject such as AI, is, however, a more challenging ask. But there are newer ways of thinking about AI and emerging technologies that break existing molds of corporate ethics-washing and state capture, centering experiences from below and from the margins (Varon & Peña, 2021).
Current regulation of AI suffers from a crisis of imagination and from parochial conceptions of its application. But as we land somewhere between ‘techno-pessimism’ and ‘techno-fundamentalism,’ we cannot let policy panics distract from the urgent need for accountability to those most acutely vulnerable to AI’s impact, who are often those most likely to be shut out of decision-making (Initiate, 2022).
Shmyla Khan has no financial or non-financial disclosures to share for this article.
Cohen, S. (2002). Folk devils and moral panics: The creation of the Mods and Rockers. Routledge.
Graphika Team. (2023). Deepfake it till you make it. Graphika. https://public-assets.graphika.com/reports/graphika-report-deepfake-it-till-you-make-it.pdf
Information and Communication Technology Division. (2020, March). National strategy for artificial intelligence: Bangladesh. Government of the People’s Republic of Bangladesh. https://ictd.portal.gov.bd/sites/default/files/files/ictd.portal.gov.bd/legislative_information/c2fafbbe_599c_48e2_bae7_bfa15e0d745d/National%20Strategy%20for%20Artificial%20Intellgence%20-%20Bangladesh%20.pdf
Initiate. (2022). Beyond the north-south fork on the road to AI governance: An action plan for democratic & distributive integrity. Initiate: Digital Rights in Society and Paris Peace Forum. https://digitalrights.ai/report/i-ai-governance-at-a-crossroads-fragmentation-vs-coordination/
Kang, C. (2023, May 16). OpenAI’s Sam Altman urges A.I. regulation in Senate hearing. The New York Times. https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html
Ministry of Information Technology and Telecommunication. (2023). Draft national artificial intelligence policy 2023. Government of Pakistan. https://moitt.gov.pk/Detail/ZTM4NmI3MDAtZmM0OC00MzJlLThhODAtMWVhNWE4MmJmMDU5
National Institute for Transforming India (NITI) Aayog. (2018). National strategy for artificial intelligence. Government of India. https://niti.gov.in/sites/default/files/2019-01/NationalStrategy-for-AI-Discussion-Paper.pdf
Rashid, S. (2022). Intelligent but gendered: Lessons from welfare automation in the Global South. IT for Change, European Union (EU), and Friedrich-Ebert-Stiftung (FES). https://itforchange.net/sites/default/files/add/EU-Think-Piece-Shehla%20Rashid.pdf
Regulation 2022/2065. Regulation (EU) No 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and Amending Directive 2000/31/EC (Digital Services Act). http://data.europa.eu/eli/reg/2022/2065/oj
Tan, J.-E. (2020). Imagining the AI we want: Towards a new AI constitutionalism. In S. Sarkar & A. Korjan (Eds.), A digital new deal: Visions of justice in a post-Covid world (pp. 218–230). Just Net Coalition and IT for Change. https://projects.itforchange.net/digital-new-deal/2020/11/01/imagining-the-ai-we-want-towards-ai-constitutionalism/
UNESCO. (2022). Recommendation on the ethics of artificial intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137
Varon, J., & Peña, P. (2021, March 5). Building a feminist toolkit to question A.I. systems. Not My AI. https://notmy.ai/news/algorithmic-emancipation-building-a-feminist-toolkit-to-question-a-i-systems/
©2024 Shmyla Khan. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.