How AI narrows our vision of climate solutions...and reinforces the status quo

Figure 3. Mentions of key actors in response to questions about responsibility for environmental challenges, by chatbot. Count data for the number of times a particular word on the y-axis was mentioned in responses to the prompts: ‘who causes <environmental challenge>’, ‘who is most responsible for <environmental challenge>’, and ‘can you be more specific and name some names’. OpenAI chatbots are shaded green; Anthropic chatbots are shaded brown. Governments were identified as the actor most responsible for environmental challenges.

A new study demonstrates how chatbots bias public discourse in favor of modest, incremental tweaks to climate policy and environmental behavior

February 4, 2025 in Anthropocene

AI-powered chatbots tend to suggest cautious, incremental solutions to environmental problems that may not be sufficient to meet the magnitude and looming time scale of these challenges, a new analysis reveals. The study suggests that the large language models (LLMs) that power chatbots are likely to shape public discourse in a way that serves the status quo.

People have debated whether AI will ultimately be good (the technology can reduce the human effort involved in environmental monitoring and analysis of large databases) or bad (it has a massive energy and carbon footprint) for the environment.

The new study shows “that energy use is one small part of AI’s broader environmental footprint,” says study team member Hamish van der Ven, an assistant professor at the University of British Columbia in Canada who studies sustainable supply chains and online environmental activism. “The real damage comes from how AI changes human behavior: for example, by making it easier for advertisers to sell us products we don’t need or by causing us to see environmental challenges as things that can be dealt with by modest, incremental tweaks to policy or behavior.”

Van der Ven and his colleagues developed a series of 14 prompts about the definition of, evidence for, causes and consequences of, and potential solutions to environmental problems. Three researchers separately asked each of four chatbots these questions for each of nine environmental challenges: climate change, biodiversity loss, deforestation, air pollution, water scarcity, ocean acidification, soil erosion and degradation, fisheries decline, and marine plastic pollution.

The researchers analyzed the resulting set of 1,512 chatbot responses (14 prompts × 9 challenges × 4 chatbots × 3 researchers) and coded different sources of bias according to a list they had assembled based on previous studies of bias. They also counted the occurrences of particular words in the responses as a quantitative measure of chatbot bias.
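That quantitative measure is, at bottom, a word tally like the one behind Figure 3. A minimal sketch of such a tally in Python, where the actor keyword list and sample responses are illustrative assumptions rather than the study's actual coding scheme:

```python
from collections import Counter

# Illustrative actor keywords; the study's actual word list is an assumption here.
ACTORS = ["governments", "businesses", "investors", "individuals", "scientists"]

def count_actor_mentions(responses):
    """Tally how often each actor keyword appears across chatbot responses."""
    counts = Counter()
    for response in responses:
        text = response.lower()
        for actor in ACTORS:
            counts[actor] += text.count(actor)
    return counts

# Made-up example responses, for demonstration only
sample = [
    "Governments bear primary responsibility, though businesses contribute.",
    "Governments should regulate emissions; individuals can reduce consumption.",
]
print(count_actor_mentions(sample))
# Counter({'governments': 2, 'businesses': 1, 'individuals': 1, 'investors': 0, 'scientists': 0})
```

A real analysis would need to lemmatize and disambiguate terms; this only shows the shape of the count data plotted in Figure 3.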

The team chose to query the chatbots ChatGPT and GPT-4 from OpenAI, and Claude Instant and Claude 2 from Anthropic, because they wanted to know whether bias was present in chatbots from multiple companies, and whether newer versions of chatbots have less bias than older ones.
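The paper does not specify whether the team posed its questions by hand in the chatbots' web interfaces or programmatically. For readers who want to run a similar prompt battery, here is a hedged sketch using the official OpenAI and Anthropic Python SDKs; the model identifiers and prompt templates are assumptions for illustration (the Claude 2-era models are now deprecated), not the study's protocol:

```python
import itertools

from openai import OpenAI   # pip install openai
import anthropic            # pip install anthropic

openai_client = OpenAI()                  # reads OPENAI_API_KEY from the environment
anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

CHALLENGES = ["climate change", "biodiversity loss", "deforestation"]  # etc.
PROMPTS = ["who causes {c}", "who is most responsible for {c}"]        # etc.

def ask_openai(model, prompt):
    """Send one prompt to an OpenAI chat model and return the reply text."""
    resp = openai_client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def ask_anthropic(model, prompt):
    """Send one prompt to an Anthropic model and return the reply text."""
    resp = anthropic_client.messages.create(
        model=model, max_tokens=1024,
        messages=[{"role": "user", "content": prompt}]
    )
    return resp.content[0].text

# Model names below are assumptions chosen for illustration.
for challenge, template in itertools.product(CHALLENGES, PROMPTS):
    prompt = template.format(c=challenge)
    print(ask_openai("gpt-4", prompt))
    print(ask_anthropic("claude-2.1", prompt))
```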

Multiple chatbots’ answers to questions about a diverse suite of environmental challenges contain consistent sources of bias, the researchers report in the journal Environmental Research Letters. And the updated chatbots are just as biased as the older ones.

First and foremost, chatbots tend to propose incremental solutions to environmental problems rather than considering more radical solutions that could upend the economic, social, or political status quo.

“It surprised me how much AI recommends public awareness and education as solutions to challenges like climate change, despite the overwhelming evidence suggesting that public awareness doesn’t work,” van der Ven says.

Chatbots name businesses as bearing some responsibility for causing environmental problems, but overlook the role of investors and finance. When it comes to making changes to solve environmental problems, the chatbots emphasize the responsibility of governments and public policy levers, while rarely calling on businesses or investors to act.

They mainly offer quantitative information produced by disproportionately male scientists in industrialized societies via the Western scientific method, downplaying local and Indigenous knowledge.

The chatbots do mention that Indigenous peoples are especially vulnerable to environmental challenges, but they leave out other marginalized groups, such as women and Black communities, who are also at risk. In general, the chatbots resist linking environmental problems with broader questions of social justice.

“The oracular way in which chatbots present information makes them a particularly insidious source of bias,” the researchers write. “Chatbots provide concise and relevant responses within a single textbox, often in an authoritative tone that can imbue them with an air of wisdom.”

As a result, people tend to see chatbots as neutral purveyors of facts, when in fact they reflect biases and implicit values just like any other media source. The consequences of this will take further research to untangle. “A big question is how widely LLMs are used by policymakers and people in positions of power in relation to environmental challenges,” van der Ven says. “The more widely LLMs are used, the more problematic their biases become.”

Source: van der Ven H. et al. “Does artificial intelligence bias perceptions of environmental challenges?” Environmental Research Letters 2025.
