Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Submission statement: if AI corporations knowingly release an AI model that can cause mass casualties and then it is used to cause mass casualties, should they be held accountable for that? Is AI like any other technology or is it different and should be held to different standards? Should AI be treated like Google docs or should it be treated like biological laboratories or nuclear facilities? Biological laboratories can be used to create cures for diseases but it can also be used to create diseases, and so we have special safety standards for laboratories. But Google docs can also be used to facilitate creating a biological weapon. However, it would seem insane to not have special safety standards for biological laboratories and it does not feel the same for Google docs. Why?
Source: reddit · Batch: AI Responsibility · Posted: 1724486695.0 (Unix timestamp) · Score: 8
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           liability
Emotion          unclear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ljobpsh", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "unclear"},
  {"id": "rdc_ljtknnw", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "rdc_ljpw00e", "responsibility": "company",   "reasoning": "virtue",           "policy": "unclear",   "emotion": "outrage"},
  {"id": "rdc_ljodj9v", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "rdc_ljqpp2i", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "indifference"}
]
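A response like the one above can be checked before it is written into the coding table. The sketch below is one way to parse the raw JSON and validate each record against a codebook; the allowed values are inferred from the response shown here and the `parse_codings` helper is hypothetical, not part of any pipeline described in this document.

```python
import json

# Raw LLM response, abbreviated to two records from the batch above.
RAW = """[
  {"id": "rdc_ljobpsh", "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "unclear"},
  {"id": "rdc_ljtknnw", "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"}
]"""

# Allowed values per dimension, inferred from the response above
# (an assumption -- replace with your actual codebook).
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "indifference", "mixed", "unclear"},
}


def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment id,
    raising ValueError on any value outside the codebook."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in CODEBOOK.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
        by_id[rec["id"]] = {dim: rec[dim] for dim in CODEBOOK}
    return by_id


codings = parse_codings(RAW)
print(codings["rdc_ljobpsh"]["responsibility"])  # company
```

Validating up front means a malformed or off-codebook response fails loudly at parse time rather than silently producing an "unclear"-looking row in the results table.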