Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In all transparency, it's in our own best interest to have AI. Humanity is flawed on many levels, ranging from our various isms (racism, fanaticism, nepotism, etc.) bundled with emotional impulse and social mores. Given our inability to perform consecutive repeatable actions, this could be a massive benefit of having AI in place. The problem, in my perspective, is who is training the AI, and will their imperfections be incorporated? Are we forward-thinking enough to remove politics and inefficiency from the way we manage and monitor the coming trend?
reddit AI Moral Status 1674143209.0 ♥ 3
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           regulate
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j4y8mbi", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "rdc_j4zijki", "responsibility": "government",  "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "rdc_j4ziw8f", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "rdc_j50k5uo", "responsibility": "developer",   "reasoning": "virtue",           "policy": "regulate",      "emotion": "approval"},
  {"id": "rdc_j50y73q", "responsibility": "distributed", "reasoning": "deontological",    "policy": "none",          "emotion": "resignation"}
]
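The model returns one JSON array per batch, with one object per comment keyed by `id`. A minimal sketch of how a single comment's coding can be recovered from that raw response, assuming the comment shown above corresponds to `rdc_j50k5uo` (the record whose values match the Coding Result table):

```python
import json

# Raw batch response from the model, verbatim from above (truncated to two records).
raw = (
    '[ {"id":"rdc_j4y8mbi","responsibility":"user","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"},'
    ' {"id":"rdc_j50k5uo","responsibility":"developer","reasoning":"virtue",'
    '"policy":"regulate","emotion":"approval"} ]'
)

# Parse the array and index the batch by comment id for O(1) lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Pull out the coding for the displayed comment (hypothetical id mapping).
coded = records["rdc_j50k5uo"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
```

This prints `developer virtue regulate approval`, matching the Coding Result table, which is how a viewer like this one would populate the Dimension/Value display from the raw response.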