Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For AI I say yes. For programable (controller) robots I say no. I’m not sure if this is ignorant on the topic (cause I honestly don’t know much about the logistics of it) but couldn’t you get your robot to do something shady, then absolve yourself of any guilt because “the robot is it’s own entity and should be the only one punished?” Or is that something different?
Source: reddit · AI Moral Status · timestamp 1524938574.0 · ♥ 1
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_dy5c2nm", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "rdc_dy4s4e2", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_dy4gvcs", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_dy4jakb", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_dy4h89a", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
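A minimal Python sketch of how a batched response like the one above can be parsed and matched back to a coded comment. The record ids and field names come from the response itself; the parsing code is illustrative and is not the pipeline's actual implementation. The coding result shown for this comment (user / deontological / liability / fear) corresponds to the record with id rdc_dy4h89a.

```python
import json

# Raw LLM response as shown above: one JSON array, one record per comment id.
raw = """[
  {"id": "rdc_dy5c2nm", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "rdc_dy4s4e2", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_dy4gvcs", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_dy4jakb", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_dy4h89a", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]"""

records = json.loads(raw)

# Index records by id so one comment's coding can be looked up directly.
by_id = {r["id"]: r for r in records}

# Look up the record whose values match the coded result displayed above.
coded = by_id["rdc_dy4h89a"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
```

Indexing by id rather than position makes the lookup robust if the model returns the records in a different order from the input batch.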