Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm reminded of a quote by Rob Miles in his video "Intelligence and Stupidity: The Orthogonality Thesis" (you should watch it, it's pretty good, and he's been discussing AI safety for a long time): "I'm using intelligence here as a technical term in the way that it's often used in the field. You're free to have your own definition of the word, but the fact that something fails to meet your definition of intelligence does not mean that it will fail to behave in a way that most people would call intelligent. If the stamp collector outwits you, gets around everything you've put in its way and outmaneuvers you mentally, it comes up with new strategies that you would never have thought of to stop you from turning it off and stop you from preventing it from making stamps. And as a consequence, it turns the entire world into stamps in various ways you could never think of. It's totally okay for you to say that it doesn't count as intelligent if you want, but you're still dead." I think similarly about machine consciousness. I don't know if AI can be conscious, but I don't think it has to be; it can be highly intelligent without having consciousness. So whether or not these are just math equations that aren't actually *thinking*, it could still kill us all.
youtube AI Moral Status 2025-10-31T07:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx6vZjGSGg4CrL-nnN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzbtNzVpAcjVuqkKRJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzkAZDOJhmoC8Hinhh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyfiR1311E7PqIM26J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugyt13y3qcMLhP5Gm6Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz48ZOMgXd_uPzTEFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyYz43cuN5TRi6_PMN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwaNpbwGEXfFOnqAXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5Ge7eWsLI7MIRADV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwZW5NeKjUA4OAeTLR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
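A response like the one above can be parsed and sanity-checked before the coded values are trusted. The sketch below is a minimal validator; the allowed value sets per dimension are inferred from the values visible in this single response, not from the project's actual codebook, so treat them as assumptions.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred
# from the one raw response shown above, not from an official codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and verify every coded value is allowed.

    Raises ValueError on the first record with an out-of-vocabulary value,
    so malformed model output is caught before it enters the dataset.
    """
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_Ugx6vZjGSGg4CrL-nnN4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
print(len(validate_codings(raw)))  # 1
```

Looking up a single comment's coding, as the table above does, is then a matter of indexing the validated records by `id`.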