We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next.
This isn’t simply the norm of a digital world. It’s unique to AI, and a marked departure from Big Tech’s electricity appetite in the recent past. From 2005 to 2017, the amount of electricity going to data centers remained quite flat thanks to increases in efficiency, despite the construction of armies of new data centers to serve the rise of cloud-based online services, from Facebook to Netflix. In 2017, AI began to change everything. Data centers started getting built with energy-intensive hardware designed for AI, which led them to double their electricity consumption by 2023.
It’s estimated that training OpenAI’s GPT-4 cost more than $100 million and consumed 50 gigawatt-hours of energy, enough to power San Francisco for three days.
As conversations with experts and AI companies made clear, inference, not training, represents an increasing majority of AI’s energy demands and will continue to do so in the near future. It’s now estimated that 80–90% of computing power for AI is used for inference.
This points to a common misconception: that the energy impact of AI is a one-time cost, paid during training.
In reality, the type and size of the model, the type of output you’re generating, and countless variables beyond your control—like which energy grid is connected to the data center your request is sent to and what time of day it’s processed—can make one query thousands of times more energy-intensive and emissions-producing than another.
But even if researchers can measure the power drawn by the GPU, that leaves out the power used up by CPUs, fans, and other equipment. A 2024 paper by Microsoft analyzed energy efficiencies for inferencing large language models and found that doubling the amount of energy used by the GPU gives an approximate estimate of the entire operation’s energy demands.
So model size is a huge predictor of energy demand. One reason is that once a model gets to a certain size, it has to be run on more chips, each of which adds to the energy required. The largest model we tested has 405 billion parameters, but others, such as DeepSeek, have gone much further, with over 600 billion parameters. The parameter counts for closed-source models are not publicly disclosed and can only be estimated. GPT-4 is estimated to have over 1 trillion parameters.
But in all these cases, the prompt itself was a huge factor too. Simple prompts, like a request to tell a few jokes, frequently used one-ninth the energy of more complicated prompts to write creative stories or recipe ideas.
Generating a standard-quality image (1024 x 1024 pixels) with Stable Diffusion 3 Medium, the leading open-source image generator with 2 billion parameters, requires about 1,141 joules of GPU energy. With diffusion models, unlike large language models, there are no estimates of how much of the total energy the GPUs are responsible for, but experts suggested we stick with the “doubling” approach we’ve used thus far because the differences are likely subtle. That means an estimated 2,282 joules total. Improving the image quality by doubling the number of diffusion steps to 50 just about doubles the energy required, to about 4,402 joules. That’s equivalent to about 250 feet on an e-bike, or around five and a half seconds running a microwave. That’s still less than the largest text model.
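The arithmetic above can be sketched in a few lines. The 1,141-joule GPU figure comes from the measurements described in the text; the 50-step GPU figure is back-derived from the quoted ~4,402-joule total, and the 800-watt microwave is an assumed typical appliance:

```python
# "Doubling" rule of thumb: whole-server energy (CPUs, fans, and other
# equipment included) is roughly twice the measured GPU energy.
GPU_JOULES_25_STEPS = 1141           # measured GPU energy for a standard image
total_25 = 2 * GPU_JOULES_25_STEPS
print(total_25)                      # 2282 joules

# Doubling the diffusion steps to 50 roughly doubles the GPU work.
GPU_JOULES_50_STEPS = 2201           # back-derived from the ~4,402 J total
total_50 = 2 * GPU_JOULES_50_STEPS
print(total_50)                      # 4402 joules

# Everyday equivalent, assuming an 800-watt microwave:
print(total_50 / 800)                # ~5.5 seconds of microwave time
```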
But three months later the company launched a larger, higher-quality model that produces five-second videos at 16 frames per second (a frame rate that is still quite low; it’s the one used in Hollywood’s silent era until the late 1920s). The new model uses more than 30 times more energy on each five-second video: about 3.4 million joules, more than 700 times the energy required to generate a high-quality image. This is equivalent to riding 38 miles on an e-bike, or running a microwave for over an hour.
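Those multiples check out against the earlier figures. Here is the same comparison in code, again assuming an 800-watt microwave:

```python
VIDEO_JOULES = 3.4e6    # one five-second video from the newer model
IMAGE_JOULES = 4402     # one high-quality image, from the earlier estimate

# How many images' worth of energy goes into one video?
print(VIDEO_JOULES / IMAGE_JOULES)   # ~772, i.e. "more than 700 times"

# Microwave equivalent, assuming an 800-watt appliance:
print(VIDEO_JOULES / 800 / 60)       # ~71 minutes, "over an hour"
```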
Let’s say you’re running a marathon as a charity runner and organizing a fundraiser to support your cause. You ask an AI model 15 questions about the best way to fundraise.
Then you make 10 attempts at an image for your flyer before you get one you are happy with, and three attempts at a five-second video to post on Instagram.
You’d use about 2.9 kilowatt-hours of electricity—enough to ride over 100 miles on an e-bike (or around 10 miles in the average electric vehicle) or run the microwave for over three and a half hours.
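A quick tally shows how that scenario reaches 2.9 kilowatt-hours, and why the videos dominate it. The per-text-query figure below is an assumption consistent with the largest text model discussed earlier; the image and video figures are the estimates quoted above:

```python
# Back-of-envelope tally of the charity-runner scenario.
TEXT_J = 6700      # one complex text response from a large model (assumed)
IMAGE_J = 4402     # one high-quality image
VIDEO_J = 3.4e6    # one five-second video

total_joules = 15 * TEXT_J + 10 * IMAGE_J + 3 * VIDEO_J
total_kwh = total_joules / 3.6e6     # 1 kWh = 3.6 million joules
print(round(total_kwh, 1))           # 2.9 kWh; the three videos account
                                     # for nearly all of it
```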
Gaps in power supply, combined with the rush to build data centers to power AI, often mean shortsighted energy plans. In April, satellite imagery revealed that Elon Musk’s X supercomputing center near Memphis was using dozens of methane gas generators to supplement grid power; the Southern Environmental Law Center alleges that the generators are not approved by energy regulators and are violating the Clean Air Act.
This variability means that the same activity may have very different climate impacts, depending on your location and the time you make a request. Take that charity marathon runner, for example. The text, image, and video responses they requested add up to 2.9 kilowatt-hours of electricity. In California, generating that amount of electricity would produce about 650 grams of carbon dioxide pollution on average. But generating that electricity in West Virginia might inflate the total to more than 1,150 grams.
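The calculation behind those two figures is just the electricity used multiplied by the carbon intensity of the local grid. The per-kilowatt-hour intensities below are approximate averages implied by the numbers in the text, not official grid statistics:

```python
# Emissions = electricity used (kWh) x grid carbon intensity (g CO2 per kWh).
KWH = 2.9            # the charity runner's total from above
CA_G_PER_KWH = 224   # California grid, relatively clean (assumed average)
WV_G_PER_KWH = 397   # West Virginia grid, coal-heavy (assumed average)

print(round(KWH * CA_G_PER_KWH))   # ~650 grams of CO2
print(round(KWH * WV_G_PER_KWH))   # ~1,151 grams of CO2
```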
In December, OpenAI said that ChatGPT receives 1 billion messages every day, and after the company launched a new image generator in March, it said that people were using it to generate 78 million images per day, from Studio Ghibli–style portraits to pictures of themselves as Barbie dolls.
One billion of these every day for a year would mean over 109 gigawatt-hours of electricity, enough to power 10,400 US homes for a year. If we add images and imagine that generating each one requires as much energy as it does with our high-quality image models, it’d mean an additional 35 gigawatt-hours, enough to power another 3,300 homes for a year. This is on top of the energy demands of OpenAI’s other products, like video generators, and those of all the other AI companies and startups.
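Scaling the per-query figures up to a year of traffic reproduces those totals. The per-message energy below is an assumption chosen to be consistent with the quoted 109-gigawatt-hour figure, and the household consumption is an approximate US average:

```python
MSG_WH = 0.3                 # assumed energy per text message, watt-hours
IMG_WH = 4402 / 3600         # one high-quality image, watt-hours (~1.22 Wh)
HOME_KWH_PER_YEAR = 10_500   # approximate average US household consumption

text_gwh = 1e9 * 365 * MSG_WH / 1e9     # ~109.5 GWh per year of messages
image_gwh = 78e6 * 365 * IMG_WH / 1e9   # ~34.8 GWh per year of images
print(text_gwh, image_gwh)

# Homes powered by the messages' share alone: roughly 10,400.
print(round(text_gwh * 1e6 / HOME_KWH_PER_YEAR))
```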
We will give complex tasks to so-called “reasoning models” that work through tasks logically but have been found to require 43 times more energy for simple problems, or “deep research” models that spend hours creating reports for us.
Every researcher we spoke to said that we cannot understand the energy demands of this future by simply extrapolating from the energy used in AI queries today. And indeed, the moves by leading AI companies to fire up nuclear power plants and create data centers of unprecedented scale suggest that their vision for the future would consume far more energy than even a large number of these individual queries.
By 2028, the researchers estimate, the power going to AI-specific purposes will rise to between 165 and 326 terawatt-hours per year. That’s more than all electricity currently used by US data centers for all purposes; it’s enough to power 22% of US households each year. That could generate the same emissions as driving over 300 billion miles—over 1,600 round trips to the sun from Earth.
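Those comparisons can be sanity-checked with rough, commonly cited conversion factors; the household count, grid intensity, and per-mile car emissions below are approximations we are assuming, not figures from the researchers:

```python
TWH = 326                  # high end of the 2028 projection
HOME_KWH = 11_000          # approx. average US household electricity per year
US_HOUSEHOLDS = 132e6      # approx. number of US households

homes_powered = TWH * 1e9 / HOME_KWH
print(round(homes_powered / US_HOUSEHOLDS * 100))   # ~22% of households

GRID_G_PER_KWH = 390       # approx. US grid average carbon intensity
CAR_G_PER_MILE = 400       # approx. emissions of a typical gasoline car
miles = TWH * 1e9 * GRID_G_PER_KWH / CAR_G_PER_MILE
print(round(miles / 1e9))              # ~318 billion miles
print(round(miles / (2 * 93e6)))       # ~1,700 round trips to the sun
                                       # (93 million miles each way)
```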
The Lawrence Berkeley researchers offered a blunt critique of where things stand, saying that the information disclosed by tech companies, data center operators, utility companies, and hardware manufacturers is simply not enough to make reasonable projections about the unprecedented energy demands of this future or estimate the emissions it will create. They offered ways that companies could disclose more information without violating trade secrets, such as anonymized data-sharing arrangements, but their report acknowledged that the architects of this massive surge in AI data centers have thus far not been transparent, leaving researchers without the tools to make a plan.
When you ask an AI model to write you a joke or generate a video of a puppy, that query comes with a small but measurable energy toll and an associated amount of emissions spewed into the atmosphere. Given that each individual request often uses less energy than running a kitchen appliance for a few moments, it may seem insignificant.
But as more of us turn to AI tools, these impacts start to add up. And increasingly, you don’t need to go looking to use AI: It’s being integrated into every corner of our digital lives.
Crucially, there’s a lot we don’t know; tech giants are largely keeping quiet about the details. But to judge from our estimates, it’s clear that AI is a force reshaping not just technology but the power grid and the world around us.