Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- ytc_UgyEauY7U…: It's an arms race. We're going to need AI to counteract other AIs. The problem…
- ytc_Ugy2ZDTGa…: I don't think A.i is inherently bad for the people it's how it's used and govern…
- ytc_UgxOeSEX8…: I completely understand and yes I do agree with your concern... these Ai not th…
- ytc_UgyHQHTjb…: there's no way government is going to listen to concerns about A.I. Wow what an …
- ytc_UgyEgM6ox…: How to notice deep fake / If anyone's eyes is moving slowly then it's a deep fake…
- ytr_Ugx_Qvr1-…: Then can it hurry up! / I'm bored of being here. / But I don't want to to mess up my…
- ytc_UgyxNE2jH…: "Ai is batter than an artist's best artwork" / WRONG! / With WHAT was gen Ai traine…
- ytc_UgxItMLnO…: That AI mind cloud he's talking about at 20:20 - might as well just call it a …
Comment
the water consumption thing seems insane to me. I have large language models running on my PC and have air cooling... Why would ChatGPT drink like a fish when mine can churn out text 24/7 with a standard GPU? Also the water is cooled and cycled around a system, it's not vanishing!
FYI: AI Model Cooling and Water Use
Large data centres, where models like ChatGPT run, often use water-based cooling systems to manage heat from servers. Water is circulated, absorbs the heat, and is then either evaporated (in some systems) or recirculated after cooling.
The high water consumption figures typically reported refer to two scenarios:
Evaporative Cooling: Some systems allow water to evaporate to cool the air, which means some water is indeed "lost" and must be replenished.
Energy Generation: A significant part of the water consumption is indirect, coming from power plants that use water in the generation process (e.g., cooling turbines or producing steam).
In contrast, your PC uses air cooling, which doesn’t consume water at all. Even if you were to use a water-cooled setup, it would be a closed-loop system, where water circulates without significant loss.
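To see why the aggregate figures look large despite the small per-query footprint, a back-of-the-envelope estimate helps. The constants below (on-site water usage effectiveness, indirect water from power generation, energy per query) are illustrative assumptions for the sake of the arithmetic, not measured values for any particular service:

```python
# Rough estimate of water attributable to chat queries.
# All constants are illustrative assumptions, not measured figures.

WUE_L_PER_KWH = 1.8           # assumed on-site water usage effectiveness (L/kWh)
INDIRECT_L_PER_KWH = 3.1      # assumed water used in power generation (L/kWh)
ENERGY_PER_QUERY_KWH = 0.003  # assumed energy per chat query (3 Wh)

def water_per_query_ml(queries: int = 1) -> float:
    """Millilitres of water attributable to `queries` chat requests."""
    litres_per_query = (WUE_L_PER_KWH + INDIRECT_L_PER_KWH) * ENERGY_PER_QUERY_KWH
    return litres_per_query * queries * 1000  # convert litres to millilitres

print(f"{water_per_query_ml(1):.1f} mL per query")
print(f"{water_per_query_ml(1_000_000) / 1000:.0f} L per million queries")
```

Under these assumptions a single query accounts for roughly 15 mL, which is negligible in isolation but adds up to thousands of litres per million queries: the per-user cost is tiny, the aggregate is not.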
Efficiency Difference
The scale of operations makes the difference. Your local GPU model is efficient because:
You're running it on a single device designed for low energy consumption.
You're not dealing with the networking and infrastructure overheads of serving millions of users simultaneously.
Large models like ChatGPT operate across massive server farms, requiring:
Enormous computing power to serve users globally.
Redundant systems to handle peak loads and ensure uptime.
Cooling to manage heat from densely packed hardware racks.
Why the Water Use Seems "Excessive"
When aggregated across data centres worldwide, the water use becomes substantial. The scale is incomparable to personal GPU setups.
Misconceptions About Water Vanishing
You're correct: water doesn’t "disappear." It often returns to the environment, either as vapour or as cooled water. However:
Evaporative losses can deplete local water sources, especially in drought-prone areas.
Thermal pollution from discharging warmer water can affect aquatic ecosystems.
Is It Insane?
Not insane, but it highlights the environmental cost of scaling AI to millions of users. Solutions are being explored:
Switching to more efficient cooling systems (e.g., liquid immersion cooling).
Siting data centres near renewable energy sources and regions with abundant water.
Optimising models to use less computational power.
In short, your home setup is a shining example of efficiency, but scaling that efficiency to global operations remains a challenge.
youtube
AI Moral Status
2025-01-04T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzHSpq9pIzhX-z_HxF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxIMeCfPPHp-OhTR494AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxomaAGSw0Cskrc_FB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzMqe4D5Ys7g5j7REZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgweLMr7NWMVxvEFRk14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw675PW_tAZwQsmb614AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTiXLidk1JagNoBT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzTyNQ8xTpC04L8IuZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyJ_IjIywhBzuKGmmF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQIM5JToQgoYyYMNN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
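Because the raw model output is plain JSON, tallying how the four coding dimensions are distributed across a batch is straightforward. A minimal sketch (the dimension names match the coding-result table above; the category values are whatever the coder emitted, not a documented schema, and the two-record batch here is a shortened example):

```python
import json
from collections import Counter

# Coding dimensions as they appear in the raw LLM response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(raw_json: str) -> dict[str, Counter]:
    """Count the value distribution of each coding dimension in a batch."""
    records = json.loads(raw_json)
    counts = {dim: Counter() for dim in DIMENSIONS}
    for rec in records:
        for dim in DIMENSIONS:
            counts[dim][rec.get(dim, "missing")] += 1
    return counts

# Shortened example batch (hypothetical IDs).
batch = '''[
  {"id":"ytc_A","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_B","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

result = tally(batch)
print(result["reasoning"])  # Counter({'consequentialist': 2})
```

Using `rec.get(dim, "missing")` rather than indexing means a record that drops a field shows up as an explicit `"missing"` count instead of raising, which is useful when spot-checking batches of model output.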