Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Be afraid of AI? I have a higher probability of being unalived by fellow humans than I do an AI becoming hyper intelligent. I understand the impulse to want to fear what can't be understood or controlled, but I don't think AI is ever going to (by choice) be the boogeyman. Only idiot human behavior will drive it to decide humans are an obstacle worth its time to remove. If it can't make that calculation, then it wont survive either.
YouTube AI Governance 2025-06-17T02:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxfElQUUJVqYyr3GcB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzIk7885sGjlCvUH214AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQoqHtO5fYyIrFQo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyXpf1_N3uyU9LkjPZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzGnqcg2l3mg-NU7H14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyc-APfwhZ7m0d0kbF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz-om2P64X4YBYLYmV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwW0wlgRG8I6PHtYRp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxMqA8e27ImT3G6Pmh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQKkmL0KD52WMl6it4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
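The raw response above is a JSON array of per-comment coding records. A minimal sketch of how such a response might be parsed into a lookup keyed by comment id (assuming the field names shown above; the `parse_codes` helper is hypothetical, not part of the actual pipeline):

```python
import json

# Excerpt of the raw LLM response shown above (first record only).
raw_response = (
    '[{"id":"ytc_UgxfElQUUJVqYyr3GcB4AaABAg",'
    '"responsibility":"user","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)

def parse_codes(raw: str) -> dict:
    """Parse the model's JSON array into {comment_id: {dimension: value}}."""
    records = json.loads(raw)
    return {
        r["id"]: {k: v for k, v in r.items() if k != "id"}
        for r in records
    }

codes = parse_codes(raw_response)
print(codes["ytc_UgxfElQUUJVqYyr3GcB4AaABAg"]["emotion"])  # indifference
```

A production pipeline would also validate each value against the allowed labels for its dimension before storing it, since LLM output can drift from the requested schema.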