Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's certainly not no. Otherwise we would do things without moral consequences. But they do have time limitations and also state of mind limitations. You don't hold some accountable for something they did when they were 5 years old or if they are diagnosed with something such as schizophrenia. But if they currently have it and did something bad, they have to prove they recovered from it.
Source: reddit | Topic: AI Responsibility | Timestamp: 1630363308.0 | ♥ 3
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          liability
Emotion         unclear
Coded at        2026-04-25T08:13:13.233606
Raw LLM Response
[
  {"id":"rdc_h5tyy7y","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_ha1fl1z","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"rdc_ha1rxcp","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"rdc_hazjat5","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"unclear"},
  {"id":"rdc_he280f0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
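The raw response is a JSON array covering a whole batch of comments, while the table above shows the single record matching this comment. A minimal sketch of how that lookup could work, assuming the raw string is valid JSON and that `rdc_hazjat5` is the id of the comment shown (the field names and ids are taken from the response above; the variable names are illustrative):

```python
import json

# Excerpt of a raw batch response, as shown in the "Raw LLM Response" field above.
raw = """[
  {"id":"rdc_h5tyy7y","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_hazjat5","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"unclear"}
]"""

records = json.loads(raw)

# Index the batch by comment id so one coding can be pulled out directly.
by_id = {record["id"]: record for record in records}

coding = by_id["rdc_hazjat5"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → ai_itself deontological liability unclear
```

This reproduces the Dimension/Value table for the comment: each key in the record maps onto one coded dimension.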