Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't know why I got down voted for pointing out that they are requiring an algorithm that is impossible to implement, but I'll say it again. There is an infinite number of possible shapes and configurations that people can design. It is not possible to write a reliable algorithm because of how easy it is for anyone to just redesign it. Someone can make an elephant that shoots out its trunk and its penis is the trigger. How do you make an algorithm that can prevent that?
Source: reddit · AI Harm Incident 1768864415.0 · ♥ 23
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o0kb33t", "responsibility": "company",    "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_o0ksnzh", "responsibility": "company",    "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "rdc_o0kfatu", "responsibility": "government", "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_o0l05uw", "responsibility": "government", "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "rdc_o0lbh3x", "responsibility": "government", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"}
]
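The raw response above is a JSON array with one coding per comment, keyed by comment id; the coding result shown earlier corresponds to the entry for id "rdc_o0kfatu". A minimal sketch of how such a batch response might be parsed into a per-comment lookup (the parsing approach is an assumption for illustration, not the tool's actual code; the id and field names are taken from the data above):

```python
import json

# Raw LLM response: a JSON array of codings, one object per comment id.
# Only one entry is reproduced here for brevity.
raw = '''[
  {"id": "rdc_o0kfatu", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "indifference"}
]'''

# Index the batch by comment id so each comment's coding can be looked up.
codings = {row["id"]: row for row in json.loads(raw)}

result = codings["rdc_o0kfatu"]
print(result["responsibility"])  # government
print(result["emotion"])         # indifference
```

A failed lookup (an id the model omitted from its response) would raise a KeyError here, which is one way such a pipeline could detect incomplete batch codings.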