Environmental impact of large language models in medicine

Oliver Kleinig, Shreyans Sinhal, Rushan Khurram, Christina Gao, Luke Spajic, Andrew Zannettino, Margaret Schnitzler, Christina Guo, Sarah Zaman, Harry Smallbone, Mana Ittimani, Weng Onn Chan, Brandon Stretton, Harry Godber, Justin Chan, Richard C. Turner, Leigh R. Warren, Jonathan Clarke, Gopal Sivagangabalan, Matthew Marshall-Webb, Genevieve Moseley, Simon Driscoll, Pramesh Kovoor, Clara K. Chow, Yuchen Luo, Aravinda Thiagalingam, Ammar Zaka, Paul Gould, Fabio Ramponi, Aashray Gupta, Joshua G. Kovoor, Stephen Bacchi

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

The environmental impact of large language models (LLMs) in medicine spans carbon emissions, water consumption and rare-mineral usage. Prior-generation LLMs, such as GPT-3, already have concerning environmental impacts. Next-generation LLMs, such as GPT-4, are more energy intensive and more frequently used, posing potentially significant environmental harms. We propose a five-step pathway for clinical researchers to minimise the environmental impact of the natural language algorithms they create.

Original language: English
Journal: Internal Medicine Journal
DOIs
Publication status: Accepted/In press - 2024
Externally published: Yes

Keywords

  • environment
  • large language model
  • water use

ASJC Scopus subject areas

  • Internal Medicine
