Raw LLM Responses

For any coded comment, you can inspect the exact model output it was extracted from.

Comment
Congratulations, your empathy has been fooled by a tool humans made. I just hate all this nonsense about "we don't know what consciousness really is"... Yeah and so what ? It doesn't matter. Let's just let humans have their biological history, behave like they were built to behave, and let's just let machines have their mechanical history and do what they do best : serve as tools like we designed them to be. They will never be like us because they don't have our history. We are fundamentally different from each other. It's completely fine and needed to assess the security issues about these machines and how they interact with us. AI alignement is a legitimate issue. But trying to consider if what happens inside the equations of a machine is "emotion" as we understand it is just like trying to recognize a human face into the trunk's bark of a tree and concluding : yeah, they're like us. # IT DOESN'T MAKE ANY SENSE.
Source: reddit · Thread: AI Moral Status · Timestamp: 1676637477.0 · Score: ♥ 5
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_j9odf19","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"rdc_j8w14lw","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"rdc_j8wcj5w","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"rdc_j8w2zxv","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"rdc_j8urj1d","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
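The raw response is a JSON array covering a batch of comments, one coding object per comment id. A minimal sketch of how the coding-result table above can be recovered from it — the mapping of this comment to id `rdc_j8wcj5w` is inferred from the matching dimension values, not stated explicitly in the source:

```python
import json

# Raw LLM response exactly as shown above: one JSON array,
# one coding object per comment in the batch.
raw = """[{"id":"rdc_j9odf19","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"rdc_j8w14lw","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"rdc_j8wcj5w","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"rdc_j8w2zxv","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"rdc_j8urj1d","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]"""

records = json.loads(raw)

# Pick out the record for this comment (id assumed from the matching
# values in the coding-result table: deontological / outrage).
coded = next(r for r in records if r["id"] == "rdc_j8wcj5w")

for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coded[dimension]}")
```

Selecting by id rather than by array position keeps the extraction robust if the model reorders the batch in its response.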