The Norwegian public sector has been quick to explore generative AI. Early adoption and experimentation in the public sector can act as unique probes into both the strengths and the shortcomings of this technology, provided adequate mechanisms are in place to relate instructive feedback to public debates on how societies should govern the challenges that arise. We call for increased collaboration through public–academic cross-disciplinary partnerships and bolstered frameworks for engagement and change agency.
Keywords: generative AI, regulatory sandbox, Norway, public good
Things are stirring in Norway. A year after ChatGPT made its public debut, it is widely agreed that generative AI is poised to change Norwegian society. But how? Rarely has a novel technology so quickly and so widely mobilized so many different disciplines, institutions, and organizations in search of an answer: law offices, publishers, cultural institutions, trade unions, the aid sector, the consumer council, and various public bodies are voicing both concern and delight.
The Norwegian Digitalization Agency was quick to release guidance tailored to the public sector, encouraging feedback and community engagement to streamline governance. Teknologirådet, a parliamentary advisory body, is exploring the impact of generative AI on democracy and freedom of speech. The Norwegian Artificial Intelligence Research Consortium aims to build a Scandinavian language model and infrastructure to fine-tune models to public sector needs. The Norwegian Office of the Auditor General has initiated an audit into the responsible and trustworthy implementation and deployment of AI in public administration.
Public agencies have also been quick to adopt and engage with generative AI. Ruter-GPT is tuned to Norwegian travel and transport data. GPT UiO from the University of Oslo boasts enhanced and tailored privacy protections. The Norwegian Tax Administration and Innovation Norway, among others, have experimented with internal implementations of ChatGPT and drafted their own guidelines for use. The list goes on.
This early experimentation in the public sector may appear counterintuitive. After all, some of the highest-risk scenarios are found precisely in the public sector, so it may seem prudent to proceed with caution. However, if the lessons of the past are anything to go by, it would be a mistake to let ‘Big Tech’ alone pave the way.
Firstly, generative AI raises thorny questions that are unlikely to find adequate design and governance solutions outside the realm of the public agencies that seek to deploy these systems. Questions of how well generative AI aligns with human values and intentions, for example, must be assessed in relation to the values encoded in the policy directives and regulatory frameworks that govern public administration. Similarly, questions of the harms posed by erratic model behavior at scale and over time must be viewed through a social lens, with due appreciation for the rule of law and the latent asymmetry of power between citizen and state.
Secondly, public agencies are already subject to comprehensive regulatory regimes and several established checks and balances. How well these moats hold up in the face of technological change is often a core source of uncertainty and anxiety, and hence central to policy development.
Thirdly, the ‘constraints’ imposed on public institutions are typically designed to place the wider interest of the public above the narrower interest of the deploying agency. In many instances, this may provide a far better point of departure for service innovations with generative AI.
Public agencies should embrace and explore generative AI, because the public sector provides a sounder test bed in which to probe the strengths and expose the shortcomings of new technologies. It also offers an optimal platform for relating instructive feedback to public debates on how societies should seek to govern them.
As such, it is uniquely placed to lead and set the standard for responsible and ethically sustainable practices in uptake and deployment of generative AI in society at large.
Success hinges on our ability to leverage public sector experiments with generative AI as stress tests for governance. The Norwegian Data Protection Authority’s regulatory sandbox provides one such arena, but without the right tools and instruments to facilitate wider cross-disciplinary and societal engagement, we risk building castles in the sand. To build a more sustainable foundation, sandboxes must be enhanced by public–academic cross-disciplinary partnerships:
When established practice is thin, regulatory sandboxes are useful tools to uncover and navigate legal grey zones and shortcomings in legislation. Early experiences in Norway show promise in this regard. Lifting the veil on such uncertainty is key to ensuring sound governance and responsible adoption. However, examining issues through a single lens, such as privacy, has limitations, as highlighted by the experiences of the Norwegian Labor and Welfare Administration. To facilitate debates about how society should respond, sandboxes must be accompanied by mechanisms for broader academic engagement and change agency.
Sandboxes raise many unexplored cross-disciplinary questions that merit attention from academia. Methods for explaining generative AI models need to be coupled with studies of how design and implementation affect utility and trust. Tools to probe biases and assess fairness need to be coupled with legal analysis of their applicability. The public sector provides a unique test bed for cross-disciplinary academic exploration of these topics, precisely because its proper functioning requires robust and holistic solutions to these challenges.
The fruits of these interactions must find their way into how public services are conceived, designed, operated, and managed. The public sector should build capacity to shape digital tools in the interest of the wider public, rather than rely solely on market-based solutions. Research must be joined up with iterative software and service delivery cycles in the public sector to stimulate both cross-disciplinary inspection and wider societal engagement with these issues. To achieve these interactions, research incentives must be realigned: reward mechanisms for engagement with public sector agencies must be created and operationalized.
A vibrant start-up scene around applied research has long been a hallmark of successful innovation ecosystems. Fostering a similar culture of innovation between academia and public agencies may go a long way in our collective effort to shape this technology on our own terms.
We would like to thank our colleagues at the Norwegian Artificial Intelligence Research Consortium (NORA.ai) and the Norwegian Labor and Welfare Administration (NAV) for their input and comments.
Alex Moltzau and Robindra Prabhu have no financial or non-financial disclosures to share for this article.
©2024 Alex Moltzau and Robindra Prabhu. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.