Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hey, just in case people here don't know, I am BEGGING you, please stop using AI of any kind. Whether that's ChatGPT, Grok, Gemini, Copilot, image generators, AI girlfriends, all of it. Not just for the reasons you may have heard like plagiarism, mistakes, or stealing from artists, but environmental reasons. The data centers that power these AIs are poisoning the communities they're built in and drawing massive amounts of water away from those who need it when we're already in a water crisis. And please tell this to others if you see them using AI. I know it can be awkward and intimidating to bring this up to people, you feel like you're being annoying, or a killjoy, or some self-righteous SJW, but this is about OUR future on OUR planet. Is saving yourself 5 seconds of embarrassment worth that? If you're like me and take shorter showers, turn off the lights when you leave the room, recycle everything you can, then stop using AI. It not only cancels out all of what you did to help, but contributes to climate change and pollution even more than before. Often times, they don't know how bad it is and just learning is all they need to stop. There are some mini documentaries you can watch that explain what's going on, like those from More Perfect Union: https://youtu.be/3VJT2JeDCyw https://youtu.be/jjkaYyysYhA
youtube AI Moral Status 2025-10-15T17:5…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzsvwsExVX3HgjWIl54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxOvX3zqNSfFZFubDt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzy6GwSEvuubZMG6x94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzY04sq-CcrNdUNeFd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwOr18gGdH9kNZl72t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy9XcChXRCxEbidhrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxA2_6XG5UmRnhUYw54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxnn_kBPlDC_skehqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxV1JvDMrthTK-ZKUB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzcJ1bPucf9IFbc-i94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
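The raw response is a JSON array with one object per comment, keyed by comment id. As a minimal sketch (the variable names and the two-entry sample payload here are illustrative, not part of the tool), the coding for a single comment can be looked up like this:

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of codings,
# one object per comment id, as shown in the record above.
raw_response = '''[
  {"id": "ytc_UgzY04sq-CcrNdUNeFd4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxA2_6XG5UmRnhUYw54AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Retrieve the coding for the comment displayed above.
coding = codings["ytc_UgzY04sq-CcrNdUNeFd4AaABAg"]
print(coding["policy"], coding["emotion"])  # ban fear
```

Indexing by id rather than scanning the list each time keeps lookups cheap when a batch response covers many comments.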