
Can ChatGPT Save the Planet?

A look at how artificial intelligence will interact with our efforts to deal with climate change.

I will remember 2022 as a breakthrough moment for artificial intelligence (AI). I have luddite tendencies, and I worry a lot about how our society will respond to rapid changes unleashed by advances in AI. So I reacted to the public releases of AI products this past year with a mixture of fascination and concern.

As a climate scholar, I find it natural to wonder how these forces will interact with climate change. Will AI help humanity cut emissions and adapt to climate change, or will it only make matters worse? I’ve come to believe that the right answer depends on whether you think the real challenge with addressing climate change is a technological problem or a power struggle.

Image generated by DALL-E (prompt: “robot scientist working with human to save the environment.”)

AI tools are amazing

To think this column is worth reading, you have to already have had an “AI moment.” If you haven’t already spent hours pondering the avocado chair, stared in awe at a GPT-3 demo, or tried to top “Write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from the VCR,” then stop reading. Go do those things for a while. Then take a long walk, think about humanity, and hug someone you love.

The first prompt I fed to ChatGPT was “write a column in the style of Severin Borenstein.” (I assume this is a common entry.) Three seconds later, I had seven passable paragraphs endorsing electric vehicle adoption, but noting challenges related to range anxiety, charging infrastructure, and pollution associated with upstream electricity generation from coal and natural gas. It was too milquetoast to have actually been written by Severin, but it was a eureka moment. I could immediately see both how this tool could make me better at some parts of my job and how a moderately improved version could replace me.

The default view is techno-optimism

I’ve come to think of the impact of AI on the climate in three tiers. The third, most distant, tier is some hard-to-contemplate long run: either robots take over the planet or they solve all our problems and leave us in a utopia. The implications for climate change make for a fun conversation over drinks, but it is too speculative for the EI blog, even for me.

The first, most proximate, tier is what you find when you Google phrases like “AI and climate change” or “will AI help or hurt the climate?” What mostly comes back are consulting reports and mission statements from socially conscious start-ups claiming a variety of ways in which advances in data analytics can help mitigation and adaptation efforts. Highlights include smarter grids, improved transportation networks, highly accurate disaster prediction, and ways of generating better data: on emissions, on mitigation, on forestry, on vulnerable populations.

I generated the text of this slide from ChatGPT (prompt: “bullet points about synergy between AI and climate mitigation and adaptation, in style of consulting PowerPoint deck”); the image is from DALL-E (prompt: “synergy symbol, style of consulting presentation”); the arrangement is from PowerPoint’s automated Design feature.

Much of this is credible, and improved data analytics will surely help. But the richest questions are in the middle tier, where the impacts are harder to predict.

A few observations on the middle tier

The middle tier in my taxonomy is the way in which vastly improved AI will drive changes in productivity, the economy, and society, over something like the next five to fifteen years. Here, the point is that AI is a new, fundamental tool that promises to enhance the productivity and creativity of some workers, to displace many others, and to shift concentrations of wealth and power. I predict that big changes are coming, and faster than most people realize.

How does all this impact the climate?

First, AI might accelerate economic growth. Many think growth will undermine climate progress, with some adhering to the view that degrowth is necessary. There is certainly a positive link between economic activity and emissions, which gives this view an obvious logic. Deep decarbonization, however, requires significant structural and societal change, and change is easier to achieve when the economic pie is growing.

Second, the rise of AI may aid climate innovation. Lots of research in economics has suggested that innovation is slowing down, but advances in AI could reverse that trend. And, we need more innovation to deal with the climate, not just in science and technology, but also in policy and economics, in marketing and communication.

At the same time, AI’s innovation potential is not limited to green technologies. It could be used to drive down the cost of fossil fuel production and other industries that generate emissions. Economists use the term “directed innovation” to describe how society, through policy or other forces, steers research and development in one direction or another. The critical question is whether we’ve reached the point where, as a society, we are willing to prioritize addressing climate change, so that innovation will on balance be climate positive. I’m mostly optimistic.

What makes me pessimistic, however, is a third factor: how AI tools may induce social division. The suite of AI tools now rapidly developing will disrupt many industries and displace millions of workers, many of whom now enjoy comfortable white-collar desk jobs. This may ultimately prove to be a form of creative destruction that leads to a better future, but in the meantime it seems most likely to increase income inequality, create new concentrations of power and wealth, and sow social division. Because climate progress will require an unprecedented level of global cooperation, it is best pursued in a world with less social upheaval.

A fourth, and maybe most important, way that AI will impact the climate is by making the politics of climate worse. AI tools like ChatGPT, in the words of Ezra Klein, “drive the cost of disinformation to zero.” Vested interests, from petro-states to old-fashioned oil companies, that wish to obstruct the policy and social cohesion needed to accelerate the clean energy transition have a powerful new tool. Tools like ChatGPT have no obvious relationship to the truth. They are trained on prediction and association, and they can just as easily be used to persuade people of falsehoods as to open them up to the truth.

Image generated by DALL-E (prompt: “robot holding the scales of justice with earth on the scale, in style of Banksy.”)

This month, the Ohio legislature passed a resolution that officially labels natural gas a “green” fuel, and Wyoming legislators introduced a bill to ban electric vehicles. In such a world, it is easy to imagine super-powered disinformation, applied asymmetrically, becoming a powerful tool for entrenched interests that swamps the beneficial effects of things like precision agriculture and improved grid analytics.

In the end, the optimistic case for AI aiding the climate makes sense if you think that the main problem with addressing climate change is a need for more technological solutions, and better and lower cost ways of implementing the technologies that we have. If, on the other hand, you view the climate struggle as a pitched battle between partisans, you may well come to a different, darker view.


I am a hall-of-fame worrier. I triple check that doors are locked when I leave a room and pester traveling loved ones to text me when they arrive at their destinations. I worry about earthquakes and floods, about social division and inequality, about the effects of social media and screen pollution. I worry about the status quo. I worry about change. And I really do worry a lot about AI.

At the same time, I can’t help but be charmed by the new AI tools. They are fun and clearly spark creativity. I thus find myself leaning towards an optimistic view of what they can do for the climate, and for humanity more generally. But to ultimately decide how I should really feel, I went to the source:

Keep up with Energy Institute blog posts, research, and events on Twitter @energyathaas.

Suggested citation: Sallee, James, “Can ChatGPT Save the Planet?”, Energy Institute Blog, UC Berkeley, January 23, 2023.

James Sallee

James M. Sallee is an Associate Professor in the Department of Agricultural and Resource Economics at UC Berkeley, a Research Associate of the Energy Institute at Haas, and a Faculty Research Fellow of the National Bureau of Economic Research. He is a public economist who studies topics related to energy, the environment and taxation. Much of his work evaluates policies aimed at mitigating greenhouse gas emissions related to the use of automobiles.

14 thoughts on “Can ChatGPT Save the Planet?”

  1. The SF Chronicle quoted OpenAI CEO Sam Altman today following the news that Microsoft may invest up to $10 billion: “Microsoft shares our values and we are excited to continue our independent research and work toward creating advanced AI that benefits everyone.”

    But what if my values aren’t aligned with those of Microsoft, Altman, and all the Sand Hill Road types who are driving this (not to mention initial OpenAI investor and SF office-space deadbeat Elon Musk)?

    As Bruce Sterling says, this current AI wave in no way measures up to the original vision of the Dartmouth Workshop in 1956, and that is tragic. What remains to be seen is whether AI can be more than the implementation of global surveillance capitalism. What appears to be “free” may have a very high price.

    Three decades ago, in his book Markets, Martin Mayer commented on the rise of what came to be known as high-frequency trading: “Computers are wonderful machines, but they never stop to think.”

    These things are pattern matching tools, not “intelligence” or anything that does “learning.” There is great value and great risk because of how they are designed — and at whose behest.

    AI/ML is dependent on inputs: significant prospect of garbage in. And they don’t “know” anything about accuracy or context: significant prospect of garbage out.

    Like any new tool it will take time to learn what they are good for, what they are not good for, and what the risks are. And there’s no AI for that.

  2. I just had another thought. I make the assumption that AI scours the internet, makes correlations and comes up with answers.

    If this is the case, then the AI device must filter out all content that was generated by AI. If it doesn’t do that, it will be in a positive feedback loop (a control-systems term), which is unstable. Think of the screech that occurs when you stick a microphone up to a speaker.
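    The commenter’s control-systems analogy can be put in a few lines of code. This is a toy model, not a claim about how any AI system is actually trained: a loop whose gain exceeds one blows up, while one below one dies out.

    ```python
    def run_loop(gain, steps=20, signal=1.0):
        """Iterate a simple feedback loop: on each pass the output
        is fed back as the next input, scaled by the loop gain."""
        for _ in range(steps):
            signal *= gain
        return signal

    # Gain below 1: the loop is stable and the signal decays away.
    stable = run_loop(0.9)

    # Gain above 1: the loop is unstable and the signal grows without
    # bound -- the "screech" of a microphone held up to its own speaker.
    unstable = run_loop(1.1)
    ```

    The worry in the comment is that AI-generated text re-entering the training data acts like a gain greater than one on whatever errors it contains.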

  3. Much emphasis is placed on new tools, research and advances in technology, but the harsh reality is that we have everything we need in hand and can’t find the will and vision to see that massively deploying those things we possess to decarbonize electricity production and electrify transportation to the maximum extent possible as soon as possible is what is urgently needed.

    More important, past releases of CO2 have been energizing the northern oceans and atmosphere (climate), and the resulting release of methane (GWP 75-100) from clathrates has now substantially reduced our ability to control climate change by cutting CO2 emissions. The greenhouse gases already in the atmosphere are now driving the release of more powerful greenhouse gases in an upward positive-feedback loop. AI can’t affect what people can’t affect.

    Note: Chemistry and physics and fluid mechanics and heat transfer are not part of ordinary AI.

  4. AI would tell the CPUC that the planet is worth more than the financial profits of oversized private utilities and would reject NEM 3.0 in California as it is now written for rooftop solar. Only 10% of the residential roofs in California are covered with solar panels, and that needed to be at least 70% to make a real difference in climate change. Under NEM 3.0, as written, we will be lucky to get 15% of roofs covered in solar by 2035, where under NEM 2.0 we were headed to 50% of all California roofs covered with solar by 2035. AI would have left 80% in the hands of homeowners, with 20% going to the utilities to cover infrastructure through 2028 and 70% left with homeowners until 2035 to meet the goals. Human greed and NEM 3.0 take 75% away from the homeowners, line the pockets of greedy utilities and their stockholders and executives with profits, and thus kill our planet. My 2-year-old great-grandchild may never reach my age (76) with our governments not heeding the science as we destroy our planet and ourselves for profits and the almighty dollar.

    • I have studied CA, and in the first weeks of December solar produces insufficient energy to be a reliable power supply. The additional solar capacity needed would require a lot of new transmission and far too much storage to be affordable or environmentally acceptable.

      • Building with renewables will require switching from conventional power plant sizing. We’ll be building multiples of peak demand to supply storage. PG&E’s microgrid near Yosemite ran on solar+storage all but 40 hours last year (99.5% supply of demand). The last 0.5% was supplied by a propane generator. This has already been proven out.

      • Very true. The first weeks of December were rainy in California, with shorter days in the northern hemisphere. However, if we depended on fossil fuels for just 4 months of the year and on solar and wind for the other 8 months, we would make a significant difference relative to the 100% fossil-fueled power we now depend on for 12 months of the year. Utility-owned power plants do not make money when left unused for 8 months of the year when wind and solar could potentially replace them. If utilities could make a reasonable amount from both commercial and rooftop solar to allow the idling of fossil-fueled power plants, then the utility could at least profit from the renewable energy resources. Taking it away from only residential rooftop solar and not from every other source will kill grid-tied rooftop solar. Most customers will turn their homes into micro-grids and generate and store energy to power their homes for 8 months a year, then just buy needed power for 4 months a year to charge up their batteries when the sun is not shining bright.

  5. My biggest fear is that AI tools like ChatGPT that rely on crowdsourcing for content will be manipulated, like the Microsoft AI tool that quickly turned into a bigoted racist. We’ve seen manipulation of crowdsourced information through social media by the Russians. Why should this be so different?

    Personal computers also were supposed to accelerate productivity. But after a brief spurt in the 1980s, it actually slowed.

    I think we’ve become too focused on productivity growth as our current painless escape hatch. It’s probably time to turn to redistribution to correct the widening wealth gap and to deemphasize consumerism to propel the economy.

  6. I had an epiphany at the statement “AI will drive changes in productivity.” First, just “changes in,” not “increase” nor “beneficial.” Definitely food for thought.

    But let us assume “increase” because we all think that change can be good and “increase” is good. Productivity, as I understand it, is the measure of goods and services as a function of labor. Increasing productivity means more goods and services with the same or less labor. Or fewer goods and services with even less labor. Both “goods” and “services” require resources.

    Alas, we live on a finite planet with an increasing population.

    If we increase productivity, it means we deplete resources exponentially faster. This is my epiphany. But then, it could also mean that we reduce consumption and work a lot, lot less.

    So the question I would ask the all-knowledgeable AI would be, “What steps should humans take to mitigate climate change and create a sustainable world?”

    I think we already know the answer, we just don’t want to step up to our responsibility.

  7. To err is human; it takes a computer to really screw up. The problems that ‘might’ arise when we go AI will, at least initially, be potentially massive: such as a nationwide [grid-wide] outage when a bird crashes into a solar panel. Like the FAA shutdown last week.
