Tech Giants Share AI Environmental Costs, but Gaps Remain

Google reports that Gemini prompts use minimal energy and water, but experts say incomplete methods hide the true footprint.

Google’s Numbers for Gemini

Google has published an analysis of how much power and water its Gemini chatbot uses. The company says that a single text prompt requires about 0.24 watt-hours of electricity, 0.26 milliliters of water, and emits the equivalent of 0.03 grams of carbon dioxide. By its measure, this is about the same as running a television for nine seconds.
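The television comparison can be sanity-checked with a line of arithmetic. The sketch below works backward from the article's figures to the TV wattage the comparison implies; the idea that a typical set draws roughly 100 W is an assumption, not something the report states.

```python
# Back-of-envelope check of the TV comparison in Google's report.
prompt_wh = 0.24   # watt-hours per text prompt, per Google
seconds = 9        # claimed equivalent TV runtime

# Power (W) = energy (Wh) / time (h)
implied_tv_watts = prompt_wh / (seconds / 3600)

print(f"implied TV power: {implied_tv_watts:.0f} W")  # prints "implied TV power: 96 W"
```

A ~96 W draw is plausible for a mid-size television, so the two numbers are at least internally consistent.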

The report highlights large efficiency improvements over the past year. Google claims it has reduced the electricity needed per prompt by a factor of more than thirty since mid-2024, while emissions tied to each request have dropped at a similar pace.

Mistral’s Higher Figures

French startup Mistral published its own assessment earlier this summer. For its “Le Chat” assistant, a typical response of about 400 tokens uses 50 milliliters of water and produces more than one gram of carbon dioxide. The company also disclosed training costs: building its Large 2 model was said to release over 20 kilotons of carbon dioxide and require more than 280,000 cubic meters of water, close to the volume of one hundred Olympic swimming pools.
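The pool comparison checks out if you use the standard minimum Olympic pool volume. The sketch below assumes a 50 m x 25 m x 2 m pool (2,500 m³), which is a convention, not a figure from Mistral's report.

```python
# Sanity check of the swimming-pool comparison in Mistral's assessment.
training_water_m3 = 280_000  # reported water use for training Large 2
olympic_pool_m3 = 50 * 25 * 2  # standard minimum Olympic pool, 2,500 m3 (assumed)

pools = training_water_m3 / olympic_pool_m3
print(f"about {pools:.0f} Olympic pools")  # prints "about 112 Olympic pools"
```

The result is somewhat above one hundred pools, so the article's "close to one hundred" framing is conservative.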

What Experts Say Is Missing

Specialists in energy and computing argue that the reports are incomplete. In Google’s case, the water figure covers only the cooling systems inside its data centers. It does not account for the far larger volumes tied to electricity production, since power plants also rely heavily on water for cooling and steam. Analysts point out that leaving out this factor hides a major part of the impact.

Another concern is how emissions are measured. Google used a market-based method, which takes into account the renewable energy it invests in. A location-based method, which reflects the actual mix of power sources in the grid where a data center runs, would often show higher values. Critics say that without this, the report gives only part of the picture.
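The gap between the two accounting methods comes down to which emission factor multiplies the same energy figure. The sketch below is illustrative only: the grid-intensity values are hypothetical placeholders, not Google's actual contracts or regional data.

```python
# Illustrative comparison of market-based vs location-based carbon accounting.
# Both methods multiply the same energy use by a different emission factor.

PROMPT_ENERGY_KWH = 0.24 / 1000  # 0.24 Wh per prompt, from Google's report

# Hypothetical emission factors in grams CO2e per kWh (assumed, for illustration):
LOCAL_GRID_INTENSITY = 400.0  # actual mix of the regional grid
MARKET_INTENSITY = 120.0      # after counting purchased renewable energy

def emissions_g(energy_kwh: float, intensity_g_per_kwh: float) -> float:
    """Emissions attributed to a given energy use under a given factor."""
    return energy_kwh * intensity_g_per_kwh

location_based = emissions_g(PROMPT_ENERGY_KWH, LOCAL_GRID_INTENSITY)
market_based = emissions_g(PROMPT_ENERGY_KWH, MARKET_INTENSITY)

print(f"location-based: {location_based:.4f} g CO2e per prompt")
print(f"market-based:   {market_based:.4f} g CO2e per prompt")
```

With these assumed factors the location-based figure is more than three times the market-based one, which is why critics call the choice of method material.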

Different Methods, Different Outcomes

Google says its numbers are based on the median prompt to avoid skew from extreme cases that use unusually high resources. It has not provided token counts or typical word lengths for those prompts. Earlier academic studies relied on averages and included both direct and indirect water use, which led to far higher numbers, in some cases more than 50 milliliters per request.
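The median-versus-average distinction matters because per-prompt costs are heavily skewed. The sketch below uses invented sample values, assumed only to show how a few expensive prompts move the mean while leaving the median nearly untouched.

```python
import statistics

# Hypothetical per-prompt energy samples in watt-hours: most prompts are
# cheap, but a few long or complex ones cost far more (values assumed).
samples_wh = [0.20, 0.21, 0.22, 0.23, 0.24, 0.25, 5.0, 12.0]

median_wh = statistics.median(samples_wh)  # robust to the two outliers
mean_wh = statistics.mean(samples_wh)      # pulled up sharply by them

print(f"median: {median_wh:.3f} Wh")
print(f"mean:   {mean_wh:.3f} Wh")
```

Here the mean is roughly ten times the median, which illustrates how a median-based report can legitimately show far lower per-prompt numbers than average-based academic estimates.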

Mistral’s study, while narrower, urged the industry to move toward common reporting standards. It suggested that clearer comparisons could help buyers and developers pick models with lower environmental costs.

Broader Trends in AI Use

Efficiency gains, while real, do not always translate into lower overall demand. As systems get cheaper and faster to run, people tend to use them more, which raises total consumption. Google’s sustainability report shows this effect. Even as Gemini became more efficient, the company’s total carbon emissions increased. Since 2019, its footprint has risen by more than half, largely due to the growing use of AI services.
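The arithmetic behind this rebound effect is simple: total energy is per-prompt energy times query volume, so totals can rise even as each prompt gets cheaper. In the sketch below, only the roughly thirtyfold per-prompt improvement comes from the article; the query volumes are hypothetical.

```python
# Back-of-envelope sketch: efficiency gains vs growing usage.
# Query volumes are assumed for illustration; only the ~30x per-prompt
# improvement is taken from Google's report.

wh_per_prompt_before = 0.24 * 30  # ~7.2 Wh, implied by the claimed 30x cut
wh_per_prompt_after = 0.24        # Google's current reported figure

queries_before = 100_000_000      # daily queries, hypothetical
queries_after = 4_000_000_000     # 40x growth in usage, hypothetical

total_before_mwh = queries_before * wh_per_prompt_before / 1e6
total_after_mwh = queries_after * wh_per_prompt_after / 1e6

print(f"before: {total_before_mwh:.0f} MWh/day")
print(f"after:  {total_after_mwh:.0f} MWh/day")
```

Under these assumptions, a thirtyfold efficiency gain is outpaced by a fortyfold rise in usage, and total consumption still grows by a third.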

Independent estimates underline the uncertainty. One outside analysis found that a query to OpenAI’s GPT-4o uses about 0.3 watt-hours of electricity, slightly more than Google’s figure for Gemini. Actual impact depends on model size, type of output, and which power grid handles the request.

A Partial Accounting

The reports from Google and Mistral provide an early view of AI’s environmental costs. They show that queries can appear small in isolation but raise bigger questions at scale. Without independent audits, consistent metrics, and full inclusion of indirect effects, the true footprint of artificial intelligence remains unsettled.


Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

