Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That wouldn't be an AI that fits their criteria. On the other hand, if someone were to build a robot that is supposed to help people, and the AI in the robot 'learned' the best way to help was to kill the people. THAT would be a reason the manufacturer would not be liable. The whole thing is, you have to first have AI that can learn (like humans and other animals) before you get to that point.
reddit AI Moral Status 1524935379.0 ♥ 8
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          liability
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_dy4eyxv", "responsibility": "ai_itself", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_dy4c8lm", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_dy4eaai", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_dy4h0ny", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_dy4jehu", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
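A minimal sketch of how a raw batch response like the one above can be parsed and matched back to a single comment's coding. The model returns one JSON record per comment in the batch; here the record `rdc_dy4eaai` carries the same dimension values shown in the coding result for this comment. Variable names (`raw`, `by_id`, `coding`) are illustrative, not part of any actual pipeline.

```python
import json

# The raw model output shown above: a JSON array with one coding
# record per comment in the batch, each keyed by a record id.
raw = """[
  {"id": "rdc_dy4eyxv", "responsibility": "ai_itself", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_dy4c8lm", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_dy4eaai", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_dy4h0ny", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_dy4jehu", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Index the batch by record id for direct lookup.
by_id = {r["id"]: r for r in records}

# The coding result table for this comment corresponds to rdc_dy4eaai.
coding = by_id["rdc_dy4eaai"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → distributed consequentialist liability indifference
```

Indexing by `id` rather than by list position keeps the lookup robust if the model returns the batch records out of order.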