Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean, humans have morals. *We are the only moral people*. Heck, we're the only ***things*** with the capacity for morality that we know of (don't throw BS "wut about future AI tho" at this comment plz). For example, from what I've read, it's actually been proven dogs don't feel guilty when they do something wrong... They just fear punishment. Go ahead and repress that information if you want. Maybe elephants or dolphins are moral beings? But they're also wild animals, so there's a lot of what we would consider "evil" going on there as well (pointless killing, raping/torture for fun, etc. etc.). Unlikely they have a specific set of universal principles that define what is wrong/evil and what is right/good. Philosophically, the statement "generally humans are not moral people" is wrong by definition. They might act in an evil way, but they are *the only* moral actors. Their morals might be "whatever I do for me is good, whatever other people do that is bad for me is wrong". But that's still their (perhaps poorly conceived) morals. Source: high school philosophy class (not the dolphin stuff).
Source: reddit · Topic: AI Moral Status · Timestamp: 1711056260.0 · Score: ♥ 5
Coding Result
Dimension: Value
Responsibility: none
Reasoning: virtue
Policy: none
Emotion: mixed
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_kvyzubw", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "rdc_kvzpzmt", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_kw0avtu", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_kvy8xlz", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_kw5yu67", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
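The raw response is a JSON array of per-comment code records keyed by `id`; the coded result shown above corresponds to the record with id `rdc_kvy8xlz`. A minimal sketch of pulling one comment's codes out of such a response (ids and dimension names are taken from the response above; the helper name is hypothetical):

```python
import json

# A trimmed copy of the raw LLM response: a JSON array of code records,
# one per comment, with dimension fields as shown above.
raw = '''[
  {"id": "rdc_kvyzubw", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "indifference"},
  {"id": "rdc_kvy8xlz", "responsibility": "none", "reasoning": "virtue",
   "policy": "none", "emotion": "mixed"}
]'''

def codes_for(raw_json: str, comment_id: str) -> dict:
    """Return the dimension codes for one comment id (KeyError if absent)."""
    records = {r["id"]: r for r in json.loads(raw_json)}
    return records[comment_id]

result = codes_for(raw, "rdc_kvy8xlz")
# result carries the same codes as the table above:
# responsibility=none, reasoning=virtue, policy=none, emotion=mixed
```

Indexing the records by `id` first makes repeated lookups cheap when many coded comments are inspected against one batched response.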