cross-posted from: https://lemmy.world/post/37715538
As you can compute for yourself, AI datacenter water use is not a substantial environmental problem. This long read spells out the argument numerically.
If you’d like to watch a science educator trying to make the headline claim digestible, see here.
Expanding on this: even if we take the industry’s absurd LLM growth projections at face value, the current and projected freshwater use of AI datacenters is still small compared to other, obviously wasteful uses. This is especially true if you restrict attention to inference rather than training. Once a company has already trained one of these monster models, using it to respond to a content-free work email, cheat on homework, look up a recipe, or help you write a silly HTML web page is usually a freshwater savings, because you shower and use the toilet surprisingly often compared to the cooling needs of a computer.
I will acknowledge the nuance I’m aware of:
- we don’t know the specific tech of the newest models. It is theoretically possible they’ve made inference require burning several forests down. I think this is extremely unlikely, given how similarly they behave to relatively benign mixture-of-experts models.
- some of the numbers in the linked long read are based on old projections. I still think they were chosen generously, and I’m not aware of a serious discrepancy in favor of “AI water use is a serious problem”. Please do correct me if you have data.
- there is a difference between freshwater and potable water. Except that I can’t find anyone who cares about this difference outside of one commenter. As I currently understand it, all freshwater can be made potable with a relatively modest upfront investment.
(Please note this opinion is not about total energy use. Those concerns make much more sense to me.)

The crux is this:
Humans are not consuming more water than they otherwise would to do their homework or respond to a work email. Humans require X water no matter what they’re doing. Those tasks do not add extra water consumption. Using the AI to do them does add extra water consumption.
Humans can, therefore, do those tasks for 0 net water usage. (Doing those tasks consumes X water; not doing the tasks also consumes X water; therefore the net water ‘cost’ of doing those tasks is 0.) AI can do those tasks for Y net water usage, which is a value > 0. Therefore, yes, ostensibly you’re getting more work done in less time, but the net water usage is higher. Since the water usage for the human to do the tasks is 0, any amount of water usage from the AI - no matter how much productivity it adds - increases the water per unit of work, because it is a number greater than 0.
Your point wasn’t about AI enabling humans to get more work done in less time, it was about AI using less water than humans (presumably to do the same amount of work), which is simply false.
I see. So what I should have written is “a more efficient use of fresh water” instead of “a freshwater savings”? Would that change address the point you are making?
It’s not even a more efficient use of water. It’s more work in less time (ostensibly) at a higher water per work expenditure.
I don’t see why the above argument implies a higher water per work expenditure?
The human is consuming X water whether they’re doing the work or not. That’s a constant. There are certainly activities that would require consuming more water, but they’re not the things AI is helping with. If the human is doing the work themselves, they are consuming X water. If the human is not doing the work, they are consuming X water. Therefore, the water consumption attributable to the work being done is X - X, or 0. The water per unit of work in this case is 0 / Z (where Z is the amount of work being done), or 0.
If the AI is doing the work, the human is still consuming X water (no change), but the AI is consuming Y water, which is a value greater than 0. Therefore, the water consumption attributable to the work being done is Y / Z, where Z is the amount of work being done.
Since Y is a value greater than 0, the result of Y / Z is higher than the result of 0 / Z.
Conclusion: Having the AI do the work requires a greater water per work expenditure (Y/Z) than having humans do it (0/Z).
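To make that accounting concrete, here is a minimal sketch of the model above in Python. X, Y, and Z are placeholder values made up for illustration, not measurements:

```python
# Minimal sketch of the water-per-work accounting above.
# X, Y, Z are made-up placeholder values, not measurements.

X = 150.0  # liters/day the human consumes no matter what they do (hypothetical)
Y = 0.5    # liters the AI consumes to do the task (hypothetical)
Z = 1.0    # units of work the task represents

# Human does the task: the baseline X is spent either way, so the
# water attributable to the work is X - X = 0.
human_water_per_work = (X - X) / Z  # 0.0

# AI does the task: the human's baseline is unchanged, and the AI's
# consumption Y is added on top of it.
ai_water_per_work = Y / Z           # 0.5, which is > 0.0

print(human_water_per_work, ai_water_per_work)  # 0.0 0.5
```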
(I think you’ve also done something sneaky mathematically; the units of your numerator are ‘change in freshwater use from leaving the human alive’, but the units of your denominator are ‘change in work from the human not existing at all’. I think the two units should align: either both assume the human not existing at all, or both assume the human exists. I’ve been taking the first set of units; the second set would compare 0/0 with Y/(whatever the human does instead of Z), which seems less insightful.)
Thank you, I understand your argument. I think we should complicate the model ever so slightly, because the human will exist regardless and does something with that extra time. Suppose there are two tasks, instead of just one. The first task is as we’ve described, the second task is something the human would prefer to do, but cannot do until the first task is done. Let’s say the tasks are comparable; both contribute Z to work done (in general we would have Z and Z’).
Without AI, the water use / work done is X/Z.
With AI, the water use / work done is (X+Y)/(2Z).
The second ratio is smaller whenever Y < X (since (X+Y)/(2Z) < X/Z simplifies to X + Y < 2X, i.e. Y < X), thus in this case the AI has made our freshwater use more efficient.
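As a quick numeric check of this two-task model (again with placeholder values; only the comparison matters):

```python
# Two-task model: the human does the preferred second task while the
# AI handles the first. Placeholder values, not measurements.

X = 150.0  # liters/day the human consumes regardless (hypothetical)
Y = 0.5    # liters the AI spends on task 1 (hypothetical)
Z = 1.0    # work contributed by each task

without_ai = X / Z           # human does task 1 only
with_ai = (X + Y) / (2 * Z)  # human does task 2 while the AI does task 1

# (X + Y) / (2Z) < X / Z simplifies to Y < X, so the AI scenario is
# more water-efficient exactly when the AI's water cost is below the
# human's baseline consumption.
assert (with_ai < without_ai) == (Y < X)
print(without_ai, with_ai)   # 150.0 75.25
```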
We can certainly discuss which model is more accurate or typical; I would welcome that. Do you feel this model of ‘total water use / total work done’ is fair? Generally, I put a lot of value on work that people want to do, and not all that much value on work that people would rather hand to AI, so usually Z << Z’, and I think the efficiency gain is rather large (this includes things we don’t normally call work, like self-care).
Let’s look at the examples you gave:
I’m not sure what a “content-free” work email is in this context, but any work email that I’d even consider trusting to AI is something that would take me under a minute to write by hand anyway. There’s no way I’d trust anything more complex to AI. Is this not the case for you?
If you’re using AI to cheat on homework so you can use the time you save to study in another manner that’s more conducive to your learning methods, I could maybe see this being a benefit, but if you’re using AI to do your homework so you can e.g. play a videogame, you’re really just doing yourself a disservice. If we’re assuming the homework is something you already know how to do and require no additional practice on, and it’s simple enough that AI can handle it without oversight, I suppose this could be a use case?
It takes no more than a minute to look up a recipe, and chances are good that the recipe you find will be more reliable than what AI tells you to do. AI is notoriously bad at this task, and while it’s obviously not the norm, there have been a non-zero number of cases where it told someone to do something that would be fatal if the recipe were followed.
Of the items on your list, this is the one I think has the highest chance of actually saving you some time, albeit very little. You’ll still need to write or provide the content, but you could probably save 10-30 minutes of writing HTML, assuming you’d otherwise be doing it all by hand and the page is complex enough to take that long, while not being so complex that you’ll waste time trying to explain to the AI what you want it to look like.
Anyway, the point is, it’s not like any of these tasks are something you’re going to entrust to AI and save appreciable time to dedicate to another task. Even if we charitably grant that the human has freed up substantial time that they can dedicate to something else, that’s outside of the scope of the original argument.
The argument I was responding to was that using AI for these tasks represents a water savings, not that it represents a time savings or efficiency gain.
If that’s no longer your argument, maybe it would help if you re-stated your revised position, so we’re both arguing from the same starting point?
Took a day to think it through carefully. My position is that for many AI use cases, it is more efficient to spend the freshwater having the AI do a task than having a person do that same task. Equivalently, the total freshwater spent on entities doing the tasks will be lower if the AI does them than if we have people do them.
I believe you’ve convincingly argued that this means more things will get done, not less water will be spent. I think that’s consistent with my current position, and I agree the OP does not make this distinction very well. My particular question was asking how I should update the OP to respond to your argument, as a way to check that I comprehended correctly.
Your most recent post makes the point that the tasks AI is often used for are small, easily done even more cheaply, or poor fits for the technology. I think there’s probably some useful discussion to be had here, though I might suggest we focus on what people actually seem to be doing (instead of my vibes from personal interactions). I recently learned about this paper OpenAI posted. What I extracted on a quick skim:
Speaking from experience, getting tone + politeness + clarity right in business writing is hard for me (as you can probably tell). There are plenty of emails where what needs to be said is simple, obvious, and short, but I will spend 3 hours agonizing over the wording. For such a socially anxious person, rejection-sampling what an LLM produces is likely faster and (thus) more water-efficient for that task. I think this is a pretty solid and common AI use case.
While there are many tutorials, guides, and discussions on how to do various things (like recipes), crawling through SEO text can be hard and frustrating for some people. I want a simple peanut sauce recipe for the ratios. You can either query the LLM and get an answer in 10 seconds, or you can search and weed through 4 much longer, worse-written, and likely also LLM-generated slogs for the same information (why 4? because each one will focus on some sponsored extra ingredient that probably throws off the ratio). I think this is both a common AI use and more efficient if you are not already good at managing SEO text.
(Should it matter, I’m dramatizing the previous paragraphs a fair bit. I’ve used AI for ~3 emails and ~2 recipes, in each case only after trying the usual way and getting terrible results.)
According to the paper you’re referencing, the most common use case is practical guidance. I’d argue that directly opposes your claim. Those activities actively engage both the AI and the human, so however much freshwater it would take for the human to do independent research on the topics they’re asking AI about is still being used by the human while they use the AI, and the AI’s water use occurs in addition to that.
Same goes for “seeking information”, the second most common use case. This one, I suppose, comes down to how the AI is being used. If someone asks the AI a question, takes the response at face value, and does nothing further, they will invariably spend less time than doing independent research; however, the quality of that result is roughly equivalent to typing the question into a search engine and trusting whatever the top result is, which is also a very quick task. In either case, the human is engaged during the whole process, so the AI is adding additional water usage.
In the case of writing / editing / translating, the AI is probably doing the task appreciably faster than the human would and I could perhaps see your stance holding true.
For fiction generation, I assume they’re talking about having the AI write something for the user’s consumption (e.g. roleplaying with the AI); the examples they give are “Crafting poems, stories, or fictional content”. Is reading AI-generated fiction really any better than reading a book? Because reading a book is certainly going to consume less water than having the AI write that fiction. I don’t see the appeal in AI-generated fiction personally, so I might not understand the common use case here.
I’ll also add, as a tangential point, that this only accounts for AI use that’s intentional and targeted (e.g. asking ChatGPT a question). If you also consider all of the “involuntary” AI use (for example, AI-generated entries at the top of search results when none were requested or wanted), there’s a quantity of resources being spent for zero benefit: not only water but also power, which I think is the bigger concern overall, particularly in the US right now.
Regarding your points about the time that would otherwise be spent writing emails or looking up recipes, if that’s an accurate representation of how much time you spend on those tasks, I can at least concede that using AI to accomplish them is saving you a considerable amount of time. I think you’re in a stark minority in the amount of time you spend on those tasks, however.
One issue with AI-generated recipes that I will point out is that the AI doesn’t actually know how to make the dish; it’s just compiling what it thinks is a reasonable recipe based on the recipes it has been trained on. Even if we assume that the ingredient quantities make sense for what you’re making, chances are the food will taste better, particularly for complex dishes, if you’re using a recipe curated by humans rather than an AI approximation.