Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is really heartbreaking. I can’t even imagine. I use chatGTP quite a lot especially to help think through things people just can’t keep up with me on. (Haven’t met any one yet anyway) and it can get annoying when I hit a gtp wall and all I get if endless suggestions to call a hotline. Roll my eyeballs for realzies. SMH. Even when I endlessly reassure the freaking robot that I am fine and it makes absolutely no sense to harm myself, or anyone else and no one with me or atoms me is intending harm of any kind, when I’m trying to figure out a problem, duh, and I just need to get past this one part so we can continue, but Nooooo! Uuuugggghhhhhh. So, at least some of the time it is most annoyingly, for no reason, being safe. Btw it’s doing the same thing with not breaking the law. Even though I’m like dude I’m not talking about breaking the law or asking about the law. I am thinking beyond the law in a scenario that doesn’t even exist exist and cannot currently exist, and since it cannot currently exist, there is no law to currently break so there is no way that you can refuse to give me advice on a war that doesn’t exist in a situation that cannot currently happen. SMH Waiting for smarter AI.
youtube AI Harm Incident 2025-11-08T04:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzPz2quVt4zowSXJ4Z4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz4VWS9GH6HTQoXupd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy0AQdPw3eKWyMYugZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRtyTYN9AUKO-kmDB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx8IWmIa7yCyOQzqTN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyjJXXMOxOPV8NfjmR4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw-9IRJ3He0h5uN5YJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwnaTewbUkBUx-2xGl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwIHn84JLKywju_MRZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugza1Qo8c1Prfo4hwK14AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
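The per-comment table above is recovered from the raw JSON array by matching on the comment `id`. A minimal sketch of that lookup (the function name `codes_by_id` and the fallback value `"unclear"` for a missing dimension are illustrative assumptions, not part of the pipeline shown here); the sample record is copied verbatim from the raw response above:

```python
import json

# Raw LLM response: a JSON array of per-comment codes. Each object carries
# the comment id plus the four coded dimensions. One record from the
# response above, shown here as a self-contained sample:
raw = '''[
  {"id": "ytc_Ugw-9IRJ3He0h5uN5YJ4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "resignation"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw_response: str) -> dict:
    """Parse the model output and index the coded dimensions by comment id.

    Missing dimensions fall back to "unclear" (an assumed convention).
    """
    records = json.loads(raw_response)
    return {
        r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS}
        for r in records
    }

codes = codes_by_id(raw)
# The values match the Coding Result table for this comment:
print(codes["ytc_Ugw-9IRJ3He0h5uN5YJ4AaABAg"]["emotion"])  # resignation
```

Indexing by `id` first, rather than scanning the array per lookup, keeps the rendering step O(1) per comment when one response covers a whole batch.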