Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> But, in a simplistic view, the one who placed the rock there could be held responsible. How about the creator of such a system?

That's exactly right. So we should hold the person who put the rock on top of the building. In a similar manner, we might hold the person who is using the AI or the creator of the AI, as you have suggested.

> Assuming at some point AI reaches that level of intelligence (and public usage) which could signify some danger (from decision making in self-driving cars to terminator), should the "creators" be held responsible?

Once AI has its own agency, we would probably hold the AI responsible to some degree. But we may also hold the creator of the AI to some degree. Compare: suppose that a parent raises a child, call him Joe, to be a racist. When Joe becomes an adult, we might hold Joe responsible for his beliefs. But we might hold Joe's parents to be somewhat responsible for Joe's racist upbringing.

> And, in the same topic, if the creators should be held responsible, does their responsibility stop in case the system exhibits "will"?
Source: reddit · AI Moral Status · 1487181827 (2017-02-15) · ♥ 2
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_dds7pps", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_dduo0nm", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_jy0j0ug", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jy13rn9", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jy0d5lm", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
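The raw response is a JSON array of per-comment code records. A minimal sketch of validating such a response before displaying it, written in Python; the allowed categorical values below are inferred only from the records shown on this page and are assumptions, not the coding scheme's full vocabulary:

```python
import json

# Assumed value sets for each coding dimension, taken from the records
# visible in this export; the real scheme may allow more values.
ALLOWED = {
    "responsibility": {"distributed", "unclear", "none"},
    "reasoning": {"mixed", "unclear", "consequentialist"},
    "policy": {"unclear", "none"},
    "emotion": {"unclear", "fear", "indifference", "outrage"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed code records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Each record must be an object with an id and a known value
        # for every coding dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"rdc_dds7pps","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"unclear","emotion":"unclear"},'
       '{"id":"rdc_bad","responsibility":"everyone","reasoning":"mixed",'
       '"policy":"unclear","emotion":"unclear"}]')
print(validate_records(raw))  # the second record is dropped: unknown value
```

Records that fail validation are silently dropped here; a production coder would more likely log them for re-prompting rather than discard them.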