cross-posted from: https://lemmy.world/post/37715538
As you can compute for yourself, AI datacenter water use is not a substantial environmental problem. This long read spells out the argument numerically.
If you’d like a science educator making the headline claim digestible, see here.
Expanding on this: even if we take the industry’s own inflated projections of LLM growth, the current and projected freshwater use of AI datacenters is still small compared to other obviously wasteful uses. This is especially true if you restrict attention to inference rather than training. Once a company has already trained one of these monster models, using it to respond to a content-free work email, cheat on homework, look up a recipe, or help you write a silly HTML web page usually amounts to a freshwater savings, because you shower and flush the toilet surprisingly often compared to the cooling needs of a computer.
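To make the shower/toilet comparison concrete, here is a minimal back-of-envelope sketch. The per-query water figure is an assumption, not a measurement of any particular model: publicly cited estimates range from fractions of a millilitre (Google has reported roughly 0.26 mL per median Gemini text prompt) up to a few tens of millilitres for heavier queries, so the sketch deliberately picks a pessimistic value. The shower and toilet figures are typical household ballparks.

```python
# Back-of-envelope: freshwater per LLM query vs. everyday household water use.
# All figures below are rough assumptions, chosen pessimistically for the AI side.

WATER_PER_QUERY_ML = 20.0  # assumed cooling water per query; published estimates
                           # span ~0.3 mL to a few tens of mL, so this is high-end
SHOWER_L = 65.0            # ~8-minute shower at ~8 L/min (typical showerhead)
TOILET_FLUSH_L = 6.0       # typical modern toilet flush

queries_per_shower = (SHOWER_L * 1000) / WATER_PER_QUERY_ML
queries_per_flush = (TOILET_FLUSH_L * 1000) / WATER_PER_QUERY_ML

print(f"One shower's water  ~ {queries_per_shower:,.0f} queries")
print(f"One toilet flush    ~ {queries_per_flush:,.0f} queries")
```

Even under the pessimistic 20 mL assumption, one skipped shower covers a few thousand queries and one flush covers a few hundred, which is the sense in which a model-drafted email is a rounding error in household water use.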
I will acknowledge the nuance I’m aware of:
- we don’t know the specific tech of the newest models. It is theoretically possible that inference on them requires burning several forests down. I think this is extremely unlikely, given how similarly they behave to relatively benign mixture-of-experts models.
- some of the numbers in the linked long read are based on old projections. I still think they were chosen generously, and I’m not aware of any serious discrepancy in favor of “AI water use is a serious problem.” Please do correct me if you have data.
- there is a difference between freshwater and potable water, except that I can’t find anyone who cares about this difference outside of one commenter. As I currently understand it, all freshwater can be made potable with a relatively modest upfront investment.
(Please note this opinion is not about total energy use. Those concerns make much more sense to me.)



You don’t seem familiar with how legislation gets passed. Thousands of hours are wasted on showboating, virtue signalling, soapboxing, and pandering on any random, often misdirected or pointless topic. You’re arguing for ideal treatment in a system where ideal conditions will never exist.
You’re right. I’m trying to justify my desire that people believe true things, and that’s silly of me. Editing OP.
A particular set of things being “true” is not some absolute and complete set of facts that encompasses all relevant information about a given topic. You can know a lot about the truth of a particular issue but be completely unaware of a greater context that makes that knowledge moot, or even detrimental to focus on at the expense of the bigger picture. Your desire for people to believe true things is actually silly, because observed patterns indicate that they likely won’t. Even more, they’ll believe their own “true things”: the truths, or “truths,” that they choose to focus on and value. Shouting like Willy Loman’s wife will never get you the attention you want. And it’s entirely possible that your focus is dictated by your own bias, because you don’t want to accept valid criticism of something you value.
Somehow I don’t think that “I want people to make arguments based in reality” is an unpopular opinion, though, subjectivity included. Is there some other community we should put that thread in?
Your perception of reality is subjective, so what you perceive as reality won’t necessarily coincide with other people’s perceptions of it. Pretending that your perception is the one true set of relevant perceived truths is just your bias. So when you say you want people to make arguments based in reality, you’re only referring to your own perception, not the greater picture.
But even this argument is irrelevant. Your defensiveness to every comment in this thread indicates that you’re not open to criticism, you’re possibly looking for an argument rather than other perspectives, and you’re likely disinclined to change your perspective based on feedback because you’re not asking questions, only arguing with responses.
You’re right that I’ve been feeling defensive. But I also don’t understand why you say discussion with me is pointless. In response to earlier, entirely correct comments, I’ve edited the OP to remove a bad argument that I had made. I removed it because it wasn’t honest or correct. Is that also defensive behavior?
If I extend this argument about subjectivity, I think the strongest thing it could argue is that “it doesn’t matter that people believe false things about the magnitude of AI water use”. Would you say that’s correct?