Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI would be a lot better if it would admit that it doesn't know something rather than generalizing from its training material and giving a wrong answer. Overgeneralization doesn't work when humans do it, why shouldn't AI follow the same rules?
youtube AI Moral Status 2025-11-21T04:3…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugy6XbIDP4XJRxWtD5V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugxw-JnoqeMZIlxQLU94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwPyVcFnGDd7BPghqJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwALsixkbgjZ_G3O0l4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwpOMRhLwHHUMOBb9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxDNl03B1G-p0GjgEB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzEf6ZBt9CjJZRmKD14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwGkF0MIL9b5p7jW0Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxfPapFAEIUhtWNWYF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgyT0QkD85GmNvCeNnJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"mixed"}]
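A response like the one above can be turned into a per-comment lookup with a short sketch. This is a minimal illustration, assuming the model output is a valid JSON array with the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields shown above; only the first two entries are reproduced here.

```python
import json

# Abbreviated copy of the raw LLM response (first two coded comments).
raw = '''[
 {"id":"ytc_Ugy6XbIDP4XJRxWtD5V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugxw-JnoqeMZIlxQLU94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# Index the coded comments by id for inspection.
codes = {item["id"]: item for item in json.loads(raw)}

entry = codes["ytc_Ugy6XbIDP4XJRxWtD5V4AaABAg"]
print(entry["responsibility"], entry["emotion"])  # none approval
```

In practice the raw string may not parse cleanly (the response above, for instance, was terminated with `)` instead of `]`), so a real pipeline would wrap `json.loads` in error handling before trusting the codes.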