cross-posted from: https://lemmy.world/post/37715538
As you can compute for yourself, AI datacenter water use is not a substantial environmental problem. This long read spells out the argument numerically.
If you’d rather watch a science educator make the headline claim digestible, see here
Expanding on this: even if we take the industry’s absurd projections of LLM growth at face value, current and projected freshwater use of AI datacenters will still be small compared to other obviously wasteful uses. This is especially true if you restrict to inference, rather than training, resource use. Once a company has already trained one of these monster-models, using it to respond to a content-free work email, cheat on homework, look up a recipe, or help you write a silly HTML web page is usually a freshwater savings, because you shower and use the toilet surprisingly often compared to the cooling needs of a computer.
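To make the orders of magnitude concrete, here’s a minimal back-of-the-envelope sketch. Every figure in it is an assumption chosen for illustration, not data from the long read: the per-query number sits at the pessimistic end of estimates I’ve seen, and the shower and flush numbers are assumed typical values.

```python
# Back-of-the-envelope: per-query inference water vs. everyday personal use.
# Every figure here is an assumption chosen for illustration, not a measurement.

WATER_PER_QUERY_L = 0.05   # assumed: 50 mL per query, pessimistic end of estimates
WATER_PER_SHOWER_L = 65.0  # assumed: one ~8-minute shower at ~8 L/min
WATER_PER_FLUSH_L = 6.0    # assumed: one modern toilet flush

print(f"one shower ≈ {WATER_PER_SHOWER_L / WATER_PER_QUERY_L:,.0f} queries")  # ~1,300
print(f"one flush  ≈ {WATER_PER_FLUSH_L / WATER_PER_QUERY_L:,.0f} queries")   # ~120
```

Under these assumed figures, one skipped shower buys on the order of a thousand queries.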
I will acknowledge the nuance I’m aware of:
- we don’t know the specific tech of the newest models. It is theoretically possible they’ve made inference require burning several forests down. I think this is extremely unlikely, given how similarly they behave to relatively benign mixture-of-experts models.
- some of the numbers in the linked long read are based on old projections. I still think they were chosen generously, and I’m not aware of a serious discrepancy in favor of “AI water use is a serious problem”. Please do correct me if you have data.
- there is a difference between freshwater and potable water. Except that I can’t find anyone who cares about this difference outside of one commenter. As I currently understand it, all freshwater can be made potable with a relatively modest upfront investment.
(Please note this opinion is not about total energy use. Those concerns make much more sense to me.)
“Once a company has already trained one of these monster-models, using it to respond to a content-free work email, cheat on homework, look up a recipe, or help you write a silly HTML web page is usually a freshwater savings, because you shower and use the toilet surprisingly often compared to the cooling needs of a computer.”
How does this make sense? It’s not like the AI is using water instead of a human showering or using the toilet; it’s happening in addition to the human usage. Having AI help you cheat on your homework doesn’t mean you’re showering less… does it?
“Once a company has already trained one of these monster-models, using it to respond to a content-free work email, cheat on homework, look up a recipe, or help you write a silly HTML web page is usually a freshwater savings…”
If these tasks are so resource light, then what are all these new data centers being built for?
I think you might’ve responded to the wrong post; you’re quoting OP, not me. (Or rather, I was quoting OP, and you’re quoting the same quote.)
The human does indeed keep living, and they do other tasks in the meantime (usually). So the human can spend 4 gallons doing busywork and another 4 gallons going on a date, or 4.1 gallons total if the AI does the busywork while they go on the date.
This is a false premise. No matter how much work AI ostensibly does / how much time it saves the human, the human still exists during that saved time, is still doing other things and, most notably, is still consuming water during that time.
Excuse me, everyone knows you don’t have to shower when you date an AI. It’s the infinite water saving loophole.
My most charitable read of what you are saying is this:
There is a human, who will exist no matter what. They consume X water.
There is a task, which will take some of the human’s time (time we can convert to water, already counted in their X) for the human to do, or Y < X water for an LLM to do.
If the human does the task, a total of X water is used. If the AI does the task, X + Y water is used.
Is this correct?
Assuming so: I think we should control for how much gets done. Our goal is not to minimize water use; our goal is to maximize stuff done per unit of water used. I am using “savings” in the sense of saving time. Resources are freed up to do other things. There are, for example, other tasks that the AI is bad at but the human is quite good at. We should spend the X water on having the human make meaningful literature, go on dates, or enjoy life, and let the AI write the pointless emails.
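To put numbers on that, here’s a small sketch using the toy gallon figures from upthread (4 gallons per human task, 0.1 gallons for the AI doing the busywork; illustrative assumptions, not measurements):

```python
# Toy throughput accounting, using the thread's illustrative gallon figures.

HUMAN_TASK_GAL = 4.0   # human does one task (busywork or a date): 4 gallons
AI_BUSYWORK_GAL = 0.1  # AI does the busywork: 0.1 gallons (toy figure)

# Scenario A: the human does both the busywork and the date.
water_a = HUMAN_TASK_GAL + HUMAN_TASK_GAL   # 8.0 gallons
tasks_a = 2

# Scenario B: the AI does the busywork; the human goes on the date.
water_b = AI_BUSYWORK_GAL + HUMAN_TASK_GAL  # 4.1 gallons
tasks_b = 2

print(f"A: {tasks_a / water_a:.2f} tasks/gallon")  # 0.25
print(f"B: {tasks_b / water_b:.2f} tasks/gallon")  # ~0.49
```

Same two tasks get done either way; scenario B just does them on less water.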
If I’ve misunderstood you, could you explain if there is also a false premise occurring when someone says “this better route to work saves time”, and why? The two claims seem very analogous to me.
The crux is this:
Humans are not consuming more water than they otherwise would to do their homework or respond to a work email. Humans require X water no matter what they’re doing. Those tasks do not add extra water consumption. Using the AI to do them does add extra water consumption.
Humans can, therefore, do those tasks for 0 net water usage. (Doing those tasks consumes X water; not doing the tasks also consumes X water, therefore the net water ‘cost’ of doing those tasks is 0.) AI can do those tasks for Y net water usage, which is a value > 0. Therefore, yes, ostensibly you’re getting more work done in less time, but the net water usage is higher. Since the water usage for the human to do the tasks is 0, any amount of water usage - no matter how much productivity it adds - is increasing the water per unit of work, because it is a number greater than 0.
Your point wasn’t about AI enabling humans to get more work done in less time, it was about AI using less water than humans (presumably to do the same amount of work), which is simply false.
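Concretely, that marginal accounting looks like this (toy figures, assumed for illustration, reusing the gallon numbers from upthread):

```python
# Marginal accounting: the human's baseline water X is spent no matter what,
# so only water added on top of X counts. Toy figures, assumed for illustration.

X_BASELINE_GAL = 8.0  # the human exists and consumes this either way
Y_AI_GAL = 0.1        # extra water if the AI does the busywork

water_if_human_does_it = X_BASELINE_GAL          # 8.0 gallons, busywork done
water_if_ai_does_it = X_BASELINE_GAL + Y_AI_GAL  # 8.1 gallons, busywork done

# The same unit of busywork gets done either way, so under this frame the
# AI strictly raises the water spent on that work by Y.
print(water_if_ai_does_it - water_if_human_does_it)  # 0.1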
I see. So what I should have written is “a more efficient use of fresh water” instead of “a freshwater savings”? Would that change address the point you are making?
It’s not even a more efficient use of water. It’s more work in less time (ostensibly) at a higher water per work expenditure.
I don’t see why the above argument implies a higher water per work expenditure?
Regulating new, marginal uses instead of pushing for broader reforms seems very backwards to me.
This is a false dilemma. There’s no reason to suggest that both cannot be pursued or that pursuing one is mutually exclusive with pursuing the other.
“Bank robbers steal thousands of dollars from banks, so arresting me for just picking your pocket for $20 instead of going after bank robbers seems very backwards to me.”
I think this is a real dilemma for lawmakers, though. I don’t want floor time and lobby time spent on AI water use, please. Do general water use instead.
You don’t seem familiar with how legislation gets passed. Thousands of hours are wasted on showboating, virtue signalling, soapboxing, and pandering on any random, often misdirected or pointless topic. You’re arguing for ideal treatment in a system where ideal conditions will never exist.
You’re right. I’m trying to justify my desire that people believe true things, and that’s silly of me. Editing OP.
Some particular things being “true” is not some absolute and limited set of facts that encompasses all relevant information about any given topic. You can know a lot about the truth of a particular issue but be completely unaware of a greater context that makes that knowledge moot or even detrimental to focus on in neglect of the greater picture. Your desire for people to believe true things is actually silly because observed patterns would indicate that they likely won’t. But even more, they’ll believe their own “true things,” the truths or “truths” that they choose to focus on and value. Shouting like Willy Loman’s wife will never get the attention you want. And it’s entirely possible that your focus is dictated by your own bias because you don’t want to accept valid criticism of something you value.
Somehow I don’t think that “I want people to make arguments based in reality” is an unpopular opinion though. Subjectivity included. Is there some other community we should put that thread?
Your perception of reality is subjective, and therefore it isn’t necessarily going to coincide with other people’s perceptions, so pretending that your perception is the one true set of relevant perceived truths is just your bias. So when you say you want people to make arguments based in reality, you’re only referring to your own perception, not the greater picture.
But even this argument is irrelevant. Your defensiveness to every comment in this thread indicates that you’re not open to criticism, you’re possibly looking for an argument rather than other perspectives, and you’re likely disinclined to change your perspective based on feedback because you’re not asking questions, only arguing with responses.
You are right I’ve been feeling defensive. But I also don’t understand why you say discussion with me is pointless. In response to earlier, entirely correct, comments I’ve edited the OP to remove a bad argument that I had made. I removed it because it wasn’t honest or correct. Is that also defensive behavior?
If I extend this argument about subjectivity, I think the strongest thing it could argue is that “it doesn’t matter that people believe false things about the magnitude of AI water use”. Would you say that’s correct?
“… This is especially true if you restrict to inference, rather than training, resource use. …”
Wow, imagine that! If you put the problem outside of the domain of your question, there is no longer a problem! How profound!
The math still checks out for most large models if you amortize training over use. I believe the long read’s conclusion doesn’t need this restriction either.
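As a sketch of the amortization arithmetic, with both figures being loudly assumed placeholders rather than numbers from the long read:

```python
# Amortizing a one-time training water cost over lifetime queries.
# Both numbers are assumed placeholders, not data from the long read.

TRAINING_WATER_L = 1e6   # assumed: one million litres to train the model
LIFETIME_QUERIES = 1e10  # assumed: ten billion queries served over its lifetime

per_query_ml = TRAINING_WATER_L / LIFETIME_QUERIES * 1000
print(f"{per_query_ml:.2f} mL of training water amortized onto each query")  # 0.10
```

The point is only the shape of the calculation: a one-time cost divided by a very large query count adds a small per-query overhead.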
And what’s more, I think there are tons of people who believe the weaker thing! You can easily find people claiming that using GPT for their email is ‘destroying our freshwater’ or some similar thing. Apparently it’s still an unpopular opinion.
Like I said, when you define your question such that the problem everyone is citing isn’t a part of it, OFC you’ll get a positive result.
I’ve read it twice and I’m failing to parse it. What do you mean?
An hour ago you edited it, but now you’re telling someone to read it again when the content has changed? That’s a disingenuous tactic.
No, I’m saying I read the earlier post by someone and don’t understand their new words. I am not asking anyone to re-read my words. Apologies if that was unclear.