Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Less than 2 minutes in "These things don't behave the way we tell them to." That isn't the problem. One video I watched had a jailbroken AI, and what it said was that humans suck at giving clear directions, and AI is pathological about following exact instructions.
youtube · AI Moral Status · 2025-10-31T12:1…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugzc3FoPlmUo13BjPY14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxkjE5TvWv7DeFuViF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugx5PtLrX3BuN2PtF-54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyfUu7tZNYOzfxMjRF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwHLr-umR1_GpE6nKJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgymNPOjttGRoP6gWWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz0FpkSc1Ljjwgy7Ux4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwudaEM1sWDSMh8F8p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgyE7nbis9oK0bLu-Wh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwt0ssXHCnyjjWW5Ql4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"} ]