
Carbon Emissions in the Tailpipe of Generative AI

Published on Aug 20, 2024

Abstract

This article responds to the call for exploring the wider societal risks and impacts of generative AI, particularly its environmental costs. Through a review of the available evidence on the carbon and water costs of large language models (LLMs), we point out that generative AI technologies are distinctly resource intensive. We argue that the field must reframe the scope of machine learning research and development to include carbon and other resource considerations across the lifecycle and supply chain, rather than setting these aside or allowing them to remain on the field’s margins.

Keywords: artificial intelligence, carbon emissions, climate change, environmental justice


1. Introduction

Recall, if you can, the frenzy of interest in blockchain technologies, from Bitcoin to metaverse applications. While proponents considered these to be paradigm-shifting innovations, critics argued that the burgeoning interest in crypto futures was driven by the influencers and investors who stood to profit from this speculation. As spending on crypto in forms like non-fungible tokens (NFTs) accelerated, a vocal contingent took a stand against NFTs on the basis of their outsized carbon footprint—revealing a moral hierarchy of energy uses (Cooper, forthcoming). Even amid the crypto winter, the hardware and infrastructure attached to cryptocurrency mining enterprises remain associated with rising emissions (Neumueller, 2022), indicating that short-sighted, venture capital–backed speculative investments may have long-term repercussions.

Notably, the same dialogue about the ecological impact of crypto as a set of computational approaches is not present to the same degree in our collective turn toward generative AI, despite similarities in their ecological impacts. Researchers consistently call out the various ecological impacts of generative AI, and there have been advances in regulating AI’s growing environmental footprint: the Artificial Intelligence Environmental Impacts Act of 2024 (S. 3732, 118th Cong.) calls for empirical studies to produce standards for measuring the full spectrum of AI’s environmental impacts, and the EU AI Act (2024) calls for a standardization process for determining AI’s environmental impacts. Yet there is still a hope that AI itself may offer climate solutions, despite a lack of evidence for such solutions (Warso & Shrishak, 2024), and critics argue that these policy measures rely too heavily on voluntary compliance, falling short of the meaningful change needed to make AI less of an environmental threat (Crawford, 2024). Meanwhile, AI is being used to accelerate oil extraction (Paroha, 2024), and AI’s overall energy demand is growing in a way that outpaces existing renewable energy infrastructure, straining the grid and leading companies like Microsoft to miss their net-zero goals because of generative AI’s need for expanded data centers, which are themselves carbon intensive to build (Rathi & Bass, 2024).

Large language models (LLMs) have ignited interest in AI because of their wide-ranging capacities across applications. But their production and deployment come at a cost: their carbon dioxide emissions and reliance upon other resources including water and land. LLMs are not unique in the industry: all information and communication technologies leave an indelible impact on the environment, from the metals mined for hardware, to the water consumed by data centers, to the electricity used to power an increasingly computerized world. But like crypto, LLMs are so computationally intensive as to accelerate the depletion of resources at a critical time.

Rather than consider carbon impact as part of the frontier of machine learning innovation, it is often considered to be out-of-scope or even set aside. For instance, Geoffrey Hinton, a prominent figure in AI who recently resigned his high-profile position at Google, argued that AI poses a more pressing existential threat to humanity than does climate change, in part because we know how to solve the problem of climate change (Coulter, 2023). Here, Hinton defines the research agenda of the field as orthogonal to climate. But it is precisely AI’s climate impacts that make it a source of risk to our collective survival (Kneese, 2024).

Machine learning research must not be driven by the loudest voices, but by real-world problems. We call on the field to include carbon emissions and other environmental factors, including downstream impacts to communities and ecosystems, as part of its design space—that is, the dimensions by which the field explicitly innovates and seeks to evolve. Responsibility in this regard lies with developers and designers as well as researchers, advocates, and policymakers to weigh the costs and benefits of generative AI and to fully understand its environmental and human repercussions. Below, we review scientific papers that grapple with the various environmental impacts of LLMs and how we might make LLMs more sustainable and equitable. A key strategy is to adopt a holistic approach that considers a range of factors, including the use of resources along with labor and community impacts across the supply chain and the AI development lifecycle. There is no singular technological solution to AI’s climate impacts because of geographic particularities and global inequalities.

2. Why Are LLMs So Bad for the Environment?

LLMs are distinct in the computational toll they exact in their training and deployment, with downstream effects on the environment and on marginalized communities (Bender et al., 2021). While LLMs are defined by the enormous data sets on which they are trained and their billions of parameters, researchers have little visibility into the resources consumed, in an absolute sense, by different computational approaches. Integrating LLMs into search engines may increase the carbon footprint of conducting a single Internet search by as much as five-fold (Stokel-Walker, 2023); applied at scale, the effects on global carbon emissions could be devastating. This higher resource consumption is also reflected in what these models cost to produce. Strubell et al. (2020) characterize the cost in dollars and carbon emissions of off-the-shelf natural language processing (NLP) models, finding substantial differences between them and calling on research and development to prioritize computational efficiency.

For instance, training the Transformer-based BERT model on a GPU has a carbon impact “roughly equivalent to a trans-American flight,” with these costs estimated to rise by an order of magnitude or more across model tuning and retraining (p. 4). The authors’ comparative figures illustrate how the choices developers make in model selection matter to the material costs of their computational tools. Other researchers argue that the implementation of best practices, including the selection of more efficient models, will lead to the eventual plateauing of the carbon emissions tied to training LLMs (Patterson et al., 2022), although this study considers LLM training but not the larger environmental footprint connected to data center construction and the rest of the AI lifecycle. Furthermore, adherence to best practices depends on organizational and social factors, including the priorities of developers, their managers, and the C-suite.
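
Such comparisons can also be instrumented directly in development workflows. Below is a minimal sketch using the open-source codecarbon library to log the estimated emissions of candidate training runs; the model names and the train_model() stub are hypothetical placeholders rather than code from the studies cited above.

```python
# Minimal sketch: comparing estimated training emissions of two candidate
# models with the open-source codecarbon library. The model names and
# train_model() are hypothetical placeholders for real training code.
from codecarbon import EmissionsTracker

def train_model(name: str) -> None:
    """Placeholder for an actual training loop."""
    pass

for model_name in ["candidate-small", "candidate-large"]:
    tracker = EmissionsTracker(project_name=f"train-{model_name}")
    tracker.start()
    try:
        train_model(model_name)
    finally:
        emissions_kg = tracker.stop()  # estimated kg CO2-equivalent for this run
    print(f"{model_name}: ~{emissions_kg:.4f} kg CO2eq")
```

Estimates like these capture only the operational energy of the tracked run; as discussed below, they omit the embodied emissions of hardware manufacturing and data center construction.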

3. What Can Developers Do?

The rising field of green AI examines strategies not only for measuring but also for mitigating AI’s climate impacts. These include:

  • Track emissions across the AI lifecycle. Researchers at Hugging Face have called for an examination of the lifecycle impacts of LLMs on the environment (Luccioni et al., 2023). Rather than only measuring the carbon emissions associated with training a model, they also included the emissions tied to manufacturing the equipment used. A recent study examines the carbon cost of general-purpose AI inference and finds that general-purpose, generative architectures have dramatically higher carbon costs than task-specific systems (Luccioni et al., 2024). The true impact of AI production and use is connected to the larger supply chains, and the poor working conditions, of the entire information and communications technology (ICT) industry, from the extraction of raw materials to hardware disposal (Kneese, 2023).

  • Implement carbon-aware software. Developers can ensure that ML training happens at times of day and in regions where more renewable energy is available on the grid, reducing the carbon intensity of their workloads (even the most efficient training, however, has a significant climate impact; Dodge et al., 2022).

    While it is possible to shift training workloads to less carbon-intensive regions or to more optimal times of day (a minimal sketch of such temporal shifting follows this list), developers do not always have full control over their working conditions, and managers may not prioritize green AI training practices. It is also not always possible to shift user-facing inference accordingly. Chien et al. (2023) assessed ChatGPT’s carbon emissions projected to the year 2035, when presumably more renewable energy sources will be available, and found that intelligent request direction algorithms, which route queries based on the carbon intensity of the power grid, could reduce the emissions tied to the user interface.

    Carbon awareness is not a silver bullet, however: with larger models, the carbon emissions tied to LLM workloads tend to be quite high, even when developers strive to use renewable energy sources:

    “Perhaps unsurprisingly, even the most efficient region of those we examined for that experiment still leads to more emissions than a full barrel of oil. If this had been trained to completion, we estimate it would have emitted 21 to 78 metric tons of CO2 (depending on the region it was run in)” (Dodge et al., 2022, p. 8).


    Some green AI researchers have argued that a focus on carbon awareness also leaves aside the larger fundamental question of supply, and whether the tech industry should be determining where the world’s energy resources are directed (Velasco, 2024). Nafus et al. (2021) have called for carbon-responsive computing that enlists the research support of social scientists who can also holistically assess AI systems for their social impacts in tandem with their quantitative climate-related costs.

  • Consider carbon cost in tandem with water cost. Along with energy, LLMs require a massive amount of water (Li, Yang, Islam, & Ren, 2023). One problem with considering only the carbon emissions associated with LLMs, and ignoring other environmental impacts, is that optimizing to reduce the carbon emissions of training a model may actually exacerbate its water cost. It is important for technologists to consider a range of factors and analyze the tradeoffs.


    Some researchers call for environmentally equitable AI through geographical load balancing, accounting for both the carbon and water footprints of AI models (Li, Yang, Wierman, & Ren, 2023); the second sketch following this list illustrates this kind of multi-resource placement. AI’s impact is greater in particular geographic regions and tends to be especially acute in the Global South and in drought-stricken areas.

  • Audit systems for environmental justice impact. Environmental justice and equity should thus be at the center of assessing the sociotechnical environmental impacts of AI. Rakova and Dobbe (2023) argue that

    “an [environmental justice] approach to the algorithmic impact assessment process would involve the consideration of material resource flows along the entire lifecycle of an algorithmic system including the supply chains of the digital infrastructure on which it runs. Learning from and building meaningful relationships with existing civil society actors, grassroots movements, and local communities, there’s a need to understand how algorithmic systems disrupt these flows leading to higher order impacts to the livelihoods of people and the resiliency of environmental ecosystems” (p. 9).

    Rather than merely creating software tooling that measures and reports carbon emissions or other environmental factors, it is important for developers to consider and actively engage with the communities and ecosystems most affected by their products.
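
To make the carbon-aware scheduling in the second item concrete, here is a minimal sketch of temporal shifting for a training job. The get_grid_intensity() function is a hypothetical stand-in for a live carbon-intensity feed (such as a grid operator’s or cloud provider’s API), and the threshold is illustrative rather than a recognized standard; nothing here is drawn from the cited papers’ implementations.

```python
# Minimal sketch: defer a training job until grid carbon intensity drops
# below a threshold. get_grid_intensity() is a hypothetical stand-in for a
# live carbon-intensity feed; the threshold is illustrative, not a standard.
import time

INTENSITY_THRESHOLD = 150.0       # gCO2eq/kWh, illustrative cutoff
CHECK_INTERVAL_SECONDS = 15 * 60  # re-check every 15 minutes

def get_grid_intensity(region: str) -> float:
    """Return the current grid carbon intensity (gCO2eq/kWh) for a region.
    Placeholder: wire this to a real carbon-intensity data source."""
    raise NotImplementedError

def run_training_job() -> None:
    """Placeholder for the actual training entry point."""
    raise NotImplementedError

def run_when_grid_is_clean(region: str) -> None:
    """Poll grid intensity and launch training only in a low-carbon window."""
    while get_grid_intensity(region) > INTENSITY_THRESHOLD:
        time.sleep(CHECK_INTERVAL_SECONDS)
    run_training_job()
```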
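
The multi-resource placement raised in the third item can be sketched similarly. The following is an illustrative sketch, not the method of Li, Yang, Wierman, & Ren (2023): the region names and per-kWh figures are made-up placeholders, and the scoring simply weighs normalized carbon cost against normalized water cost.

```python
# Minimal sketch: choosing a deployment region by weighing carbon against
# water footprint. All region names and per-kWh figures are hypothetical
# placeholders; a real system would pull live grid and data center metrics.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    carbon_g_per_kwh: float  # grid carbon intensity (gCO2eq/kWh)
    water_l_per_kwh: float   # water consumed per kWh (cooling + generation)

REGIONS = [
    Region("hydro-north", carbon_g_per_kwh=30.0, water_l_per_kwh=5.5),
    Region("gas-desert", carbon_g_per_kwh=450.0, water_l_per_kwh=1.2),
    Region("mixed-coast", carbon_g_per_kwh=200.0, water_l_per_kwh=2.0),
]

def score(region: Region, regions: list[Region], carbon_weight: float) -> float:
    """Weighted sum of carbon and water cost, each normalized against the
    worst region so the two footprints are comparable. Lower is better."""
    max_carbon = max(r.carbon_g_per_kwh for r in regions)
    max_water = max(r.water_l_per_kwh for r in regions)
    return (carbon_weight * region.carbon_g_per_kwh / max_carbon
            + (1.0 - carbon_weight) * region.water_l_per_kwh / max_water)

def choose_region(regions: list[Region], carbon_weight: float = 0.5) -> Region:
    return min(regions, key=lambda r: score(r, regions, carbon_weight))

# Optimizing only for carbon picks the hydro-rich region despite its higher
# water draw; weighting water flips the choice. This is the tradeoff that
# Li, Yang, Islam, & Ren (2023) warn about.
print(choose_region(REGIONS, carbon_weight=1.0).name)  # hydro-north
print(choose_region(REGIONS, carbon_weight=0.0).name)  # gas-desert
```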

Much of the hype around generative AI focuses on speculative futures, either by foregrounding potential existential risks or potential sites of financial investment. But, with an eye toward the recent rise and fall of crypto, technologists, researchers, and advocates should look to the very real and already existing climate impacts of LLMs and other AI technologies.


Disclosure Statement

Tamara Kneese and Meg Young have no financial or non-financial disclosures to share for this article.


References

Artificial Intelligence Environmental Impacts Act of 2024, S. 3732, 118th Cong. (2024). https://www.congress.gov/bill/118th-congress/senate-bill/3732

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). ACM. https://doi.org/10.1145/3442188.3445922

Chien, A. A., Lin, L., Nguyen, H., Rao, V., Sharma, T., & Wijayawardana, R. (2023). Reducing the carbon impact of generative AI inference (today and in 2035). In G. Porter & T. Anderson (Eds.), HotCarbon ’23: Proceedings of the 2nd Workshop on Sustainable Computer Systems (Article 11). ACM. https://doi.org/10.1145/3604930.3605705

Cooper, Z. G. T. (forthcoming). BIT/COIN/RARE/EARTH: Data, energy, and extraction across the Arctic [Unpublished doctoral dissertation]. University of Pennsylvania.

Coulter, M. (2023, May 8). AI pioneer says its threat to world may be more urgent than climate change. Reuters. https://www.reuters.com/technology/ai-pioneer-says-its-threat-world-may-be-more-urgent-than-climate-change-2023-05-05/

Crawford, K. (2024, February 20). Generative AI’s environmental costs are soaring — and mostly secret. Nature. https://www.nature.com/articles/d41586-024-00478-x

Dodge, J., Prewitt, T., Des Combes, R. T., Odmark, E., Schwartz, R., Strubell, E., Luccioni, A.S., Smith, N.A., DeCario, N., & Buchanan, W. (2022). Measuring the carbon intensity of AI in cloud instances. In FAccT ’22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1877–1894). ACM. https://doi.org/10.1145/3531146.3533234

Kneese, T. (2023). Climate justice & labor rights. SSRN. http://dx.doi.org/10.2139/ssrn.4533853

Kneese, T. (2024, February 12). Measuring AI’s environmental impacts requires empirical research and standards. Tech Policy Press. https://www.techpolicy.press/measuring-ais-environmental-impacts-requires-empirical-research-and-standards/

Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI less “thirsty”: Uncovering and addressing the secret water footprint of AI models. ArXiv. https://doi.org/10.48550/arXiv.2304.03271

Li, P., Yang, J., Wierman, A., & Ren, S. (2023). Towards environmentally equitable AI via geographical load balancing. ArXiv. https://doi.org/10.48550/arXiv.2307.05494

Luccioni, S., Jernite, Y., & Strubell, E. (2024). Power hungry processing: Watts driving the cost of AI deployment? In FAccT ’24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (pp. 85–99). ACM. https://doi.org/10.1145/3630106.3658542

Luccioni, S., Viguier, S., & Ligozat, A. (2023). Estimating the carbon footprint of BLOOM, a 176B parameter language model. Journal of Machine Learning Research, 24(1), Article 253.

Nafus, D., Schooler, E., & Burch, K. (2021). Carbon-responsive computing: Changing the nexus between energy and computing. Energies, 14(21), Article 6917. https://doi.org/10.3390/en14216917

Neumueller, A. (2022, September 27). A deep dive into Bitcoin’s environmental impact. University of Cambridge Judge Business School. https://www.jbs.cam.ac.uk/2022/a-deep-dive-into-bitcoins-environmental-impact/

Paroha, A. (2024, April 23). AI’s revolutionary impact on upstream oil and gas transformation. IEEE Computer Society. https://www.computer.org/publications/tech-news/trends/ai-impact-on-oil-and-gas

Patterson, D., Gonzalez, J., Hölzle, U., Le, Q., Liang, C., Munguia, L. M., Rothchild, D., So, D. R., Texier, M., & Dean, J. (2022). The carbon footprint of machine learning training will plateau, then shrink. Computer, 55(7), 18–28. http://doi.org/10.1109/MC.2022.3148714

Rakova, B., & Dobbe, R. (2023). Algorithms as social-ecological-technological systems: An environmental justice lens on algorithmic audit. In FAccT ’23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (p. 491). ACM. https://doi.org/10.1145/3593013.3594014

Rathi, A., & Bass, D. (2024, May 15). Microsoft’s AI push imperils climate goal as carbon emissions jump 30%. Bloomberg. https://www.bloomberg.com/news/articles/2024-05-15/microsoft-s-ai-investment-imperils-climate-goal-as-emissions-jump-30

Stokel-Walker, C. (2023, February 18). The generative AI race has a dirty secret. Wired. https://www.wired.com/story/the-generative-ai-search-race-has-a-dirty-secret/

Strubell, E., Ganesh, A., & McCallum, A. (2020). Energy and policy considerations for modern deep learning research. Proceedings of the AAAI Conference on Artificial Intelligence, 34(9), 13693–13696. https://doi.org/10.1609/aaai.v34i09.7123

Velasco, I. (2024, January 16). Carbon aware computing: Next green breakthrough or new greenwashing? HackerNoon. https://hackernoon.com/carbon-aware-computing-next-green-breakthrough-or-new-greenwashing

Warso, Z., & Shrishak, K. (2024, May 21). Hope: The AI Act’s approach to address the environmental impact of AI. Tech Policy Press. https://www.techpolicy.press/hope-the-ai-acts-approach-to-address-the-environmental-impact-of-ai/


©2024 Tamara Kneese and Meg Young. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
