Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's a very efficient way to stop AI in an emergency: destroy the power grid and/or power plants. These things consume A LOT of energy, and there's only so long a battery can last. Solar panels at their servers can't generate much energy either, not to keep it at full capacity, also just don't install them at the servers. Cutting the undersea fiber optic cables as well. Yes, it would be a catastrophe, but humans have survived for millennia without electricity and the internet. As an electrician I can say that people forget how literally EVERYTHING runs on electric power and how crucial it is. And how fragile. Robots can't do electrician work, it requires a lot of skill and motor coordination. We know it's dangerous, and we wouldn't integrate it into our grid, you can't trust people's lives with robots and AI, it would mess something up eventually. You don't have to fight AI, you just have to cut off their juice. Boom. Done.
Source: YouTube · AI Harm Incident · 2025-09-10T19:3… · ♥ 11
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
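A minimal sketch of how one coded record could be represented in Python. The field names come from the Coding Result table above; the example values in the comments are the ones that appear on this page, and the full value sets of each dimension are an assumption, not a documented schema.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One comment coded across the four dimensions shown above."""
    responsibility: str  # values seen here: ai_itself, developer, company, distributed, none
    reasoning: str       # values seen here: consequentialist, deontological, virtue, mixed
    policy: str          # values seen here: liability, regulate, ban, none, unclear
    emotion: str         # values seen here: fear, mixed, outrage, indifference
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:26:44.938723"
```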
Raw LLM Response
[ {"id":"ytc_UgyDhMUxrOFS5Tl-C114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxmYtal7_GMz0mQ7OR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgygJZY_v2OL7uPeK5J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugys6OpKsyGsTr8MRmh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyxUGGW5Pr1bv3fQK94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwzb8I-WWwlPz9yDbl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz7-LXpx6emjn0QO1F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyLAad1baZrywknuQp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgyJDtt1KKrl8oJ_oqJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzT473WlishavgK60t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"} ]