Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Okay…? Well, this isn’t stopping me from making AI agents responsibly. When I heard about this, I decided that it comes down to me personally. I have to put in the safeguards in the instruction blocks that protect users because that’s the responsible thing to do.
Source: youtube · AI Governance · 2025-07-01T11:0… · 1 like
Coding Result
Responsibility: developer
Reasoning: virtue
Policy: industry_self
Emotion: approval
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwC9zb4GR3A3yJyj314AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz2feqmB1Kl8lpKP_54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwX7pLX9NPswdUBbOd4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzpKOMnmCwirdcJeXt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyx3s9StbB-kjtXIGh4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugz1kiLdkTnmsUqq3Mh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx8_2MCfGMebWMqRG14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxPQWCiQXoydpOro4V4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgwJoDiuzEx1IdepfUh4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyI8G4dOluPIT4XSPN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
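A raw response like the one above can be parsed and sanity-checked before the per-comment codings are stored. The sketch below is a minimal example, not the pipeline's actual code; the allowed category sets are inferred from the values visible in this response and may not match the full codebook.

```python
import json

# Batch response as returned by the model: a JSON array with one
# record per comment, keyed by comment id (format shown above).
raw_response = """[
  {"id": "ytc_UgwX7pLX9NPswdUBbOd4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "industry_self", "emotion": "approval"}
]"""

# Allowed values per dimension -- inferred from this response alone,
# so treat these sets as an assumption, not the official codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "government", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_codings(text):
    """Parse a batch response, keeping only records whose values all
    fall inside the allowed category sets; return a dict keyed by id."""
    records = json.loads(text)
    valid = [rec for rec in records
             if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())]
    return {rec["id"]: rec for rec in valid}

codings = parse_codings(raw_response)
print(codings["ytc_UgwX7pLX9NPswdUBbOd4AaABAg"]["policy"])  # industry_self
```

Records with any out-of-vocabulary value are silently dropped here; a production pipeline would more likely log them for re-coding.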