Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I might be completely off topic here, but as a follow up question (I am not the OP), you say: > Something that doesn't have a will is probably not a moral agent. If so, we couldn't hold it responsible. However, it may still be an 'innocent' threat, like a rock on top of a building that could fall and hit someone on the head. But, in a simplistic view, the one who placed the rock there could be held responsible. How about the creator of such a system? Assuming at some point AI reaches that level of intelligence (and public usage) which could signify some danger (from decision making in self-driving cars to terminator), should the "creators" be held responsible? And, in the same topic, if the creators should be held responsible, does their responsibility stop in case the system exhibits "will"?
Source: reddit · Thread: AI Moral Status · Posted: 1487177441.0 (Unix, 2017-02-15 UTC) · ♥ 2
Coding Result
Dimension      | Value
Responsibility | user
Reasoning      | mixed
Policy         | liability
Emotion        | fear
Coded at       | 2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_dds1kol","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"rdc_dds1ott","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"rdc_dds7hhz","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_dds1oe9","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"rdc_dds3r39","responsibility":"user","reasoning":"mixed","policy":"liability","emotion":"fear"} ]