cross-posted from: https://lemmy.world/post/37715538
As you can compute for yourself, AI datacenter water use is not a substantial environmental problem. This long read spells out the argument numerically.
If you’d like a science educator making the headline claim digestible, see here.
Expanding on this: Even if we take the industry’s absurd LLM growth projections at face value, current and projected freshwater use of AI datacenters is still small compared to other obviously wasteful uses. This is especially true if you restrict attention to inference, rather than training. Once a company has already trained one of these monster models, using it to respond to a content-free work email, cheat on homework, look up a recipe, or help you write a silly html web page usually comes out as a freshwater saving, because you shower and flush the toilet surprisingly often compared to the cooling needs of a computer.
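To make the shower comparison concrete, here is a back-of-envelope sketch. Every constant below is an illustrative placeholder I chose for the example, not a measured or reported figure; swap in whatever per-query estimate you trust:

```python
# Back-of-envelope comparison: inference cooling water vs. everyday water use.
# All constants are illustrative assumptions, not measurements.

WATER_PER_QUERY_L = 0.03   # assumed cooling water per chat query (~30 mL)
SHOWER_L = 65.0            # assumed water for one average shower, litres
TOILET_FLUSH_L = 6.0       # assumed water for one modern toilet flush, litres

# How many queries equal one shower / one flush, under these assumptions.
queries_per_shower = SHOWER_L / WATER_PER_QUERY_L
queries_per_flush = TOILET_FLUSH_L / WATER_PER_QUERY_L

print(f"One shower ~ {queries_per_shower:.0f} queries")   # ~2167
print(f"One flush  ~ {queries_per_flush:.0f} queries")    # 200
```

Under these placeholder numbers, skipping one shower "buys" a couple thousand queries; the conclusion is robust to moving the per-query figure up or down by an order of magnitude.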
I will acknowledge the nuances I’m aware of:
- we don’t know the specific tech of the newest models. It is theoretically possible they’ve made inference require burning several forests down. I think this is extremely unlikely, given how similarly they behave to relatively benign mixture-of-experts models.
- some of the numbers in the linked long read are based on old projections. I still think they were chosen generously, and I’m not aware of a serious discrepancy in favor of "AI water use is a serious problem". Please do correct me if you have data.
- there is a difference between freshwater and potable water. Except that I can’t find anyone who cares about this difference outside of one commenter. As I currently understand it, all freshwater can be made potable with a relatively modest upfront investment.
(Please note this opinion is not about total energy use. Those concerns make much more sense to me.)



The math still checks out for most large models even when you amortize training over use. I’d wager the long read doesn’t need this concession either.
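A quick sketch of what amortizing training over use means here. Again, every number is an illustrative placeholder, not a reported figure for any real model:

```python
# Amortize an assumed one-off training water cost over lifetime queries.
# All constants are illustrative placeholders, not real figures.

TRAINING_WATER_L = 5_000_000        # assumed total freshwater for one training run
LIFETIME_QUERIES = 50_000_000_000   # assumed queries served over the model's life
INFERENCE_WATER_L = 0.03            # assumed cooling water per query (~30 mL)

# Training cost spread per query, then added to the per-query inference cost.
amortized_per_query = TRAINING_WATER_L / LIFETIME_QUERIES
total_per_query = amortized_per_query + INFERENCE_WATER_L

print(f"Training adds {amortized_per_query * 1000:.1f} mL per query")  # 0.1 mL
print(f"Total ~ {total_per_query * 1000:.1f} mL per query")            # 30.1 mL
```

With these assumptions the training share is a rounding error on the per-query total, which is why restricting to inference barely changes the conclusion.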
And what’s more, I think there are tons of people who believe the weaker thing! You can easily find people claiming that using GPT for their email is ‘destroying our freshwater’ or some similar thing. Apparently it’s still an unpopular opinion.
Like I said, when you define your question such that the problem everyone is citing isn’t a part of it, OFC you’ll get a positive result.
I’ve read it twice and I’m failing to parse it. What do you mean?
An hour ago you edited it, but now you’re telling someone to read it again when the content has changed? That’s a disingenuous tactic.
No, I’m saying I read the earlier post by someone and don’t understand their new words. I am not asking anyone to reread my words. Apologies if that was unclear.