Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "We can't even trust Google's search engine AI to give correct/accurate informati…" (ytc_UgwLFGsvF…)
- "Thank you for sharing your insights on Sophia's deeper meaning and significance …" (ytr_Ugz40O4yw…)
- "Humans have gone from hunting, to being farmers, to working in factories, to wor…" (ytc_UgzUM-M2b…)
- "If your job can be fully replaced by an AI, you have the wrong job 😂…" (ytc_UgzJRYddA…)
- "If people who barely speak English can do the job, you had to know it's only a m…" (ytc_UgyoXTgPj…)
- "AI is being trained to play “Optimal Capitalist.” That’s the problem. 1% of the …" (ytc_UgxylVOUZ…)
- "@sharkattack6670 - It’s like women are just finding out that men fantasize about…" (ytr_UgywmRKEQ…)
- "OH the Ai Artists are made cause some one doesn't want to use Ai Garbage?! Let t…" (ytc_UgxSL21vN…)
Comment
There's a very efficient way to stop AI in an emergency: destroy the power grid and/or power plants. These things consume A LOT of energy, and there's only so long a battery can last. Solar panels at their servers can't generate much energy either, not to keep it at full capacity, also just don't install them at the servers.
Cutting the undersea fiber optic cables as well. Yes, it would be a catastrophe, but humans have survived for millennia without electricity and the internet.
As an electrician I can say that people forget how literally EVERYTHING runs on electric power and how crucial it is. And how fragile. Robots can't do electrician work, it requires a lot of skill and motor coordination. We know it's dangerous, and we wouldn't integrate it into our grid, you can't trust people's lives with robots and AI, it would mess something up eventually. You don't have to fight AI, you just have to cut off their juice. Boom. Done.
youtube · AI Harm Incident · 2025-09-10T19:3… · ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyDhMUxrOFS5Tl-C114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxmYtal7_GMz0mQ7OR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgygJZY_v2OL7uPeK5J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugys6OpKsyGsTr8MRmh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyxUGGW5Pr1bv3fQK94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwzb8I-WWwlPz9yDbl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz7-LXpx6emjn0QO1F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyLAad1baZrywknuQp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyJDtt1KKrl8oJ_oqJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzT473WlishavgK60t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```