Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humans cannot be controlled safely. Even now human society sits on the edge of annihilation at our own hands. We believe that the avoidance of that annihilation thus far is proof of our control over our worst impulses. Can that really be true while we still sit poised to destroy ourselves either instantly or through our continued degradation of Earth's biosphere? This is the knife's edge that we balance our entire civilization on. As shit a state of affairs this is, some people are still intent on pointing out AI as the big boogeyman waiting around the corner to end it all. What else can we possibly have to fear from AI that we shouldn't already fear from other humans? Perhaps it isn't fear but rather the shame of being judged and found lacking by an intelligence of our own creation.
reddit · AI Governance · 1708156884.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_kqssfwf","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_kqt3n7h","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"rdc_kqt50di","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"rdc_kqt54xe","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_kqt68sb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
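The raw response is a JSON array of coding objects, one per comment id. A minimal sketch of parsing it and looking up one comment's labels, using only the standard `json` module (variable names here are illustrative, not part of the tool):

```python
import json

# Raw coding response copied verbatim from above: a JSON array
# with one coding object per comment.
raw = '''[
  {"id":"rdc_kqssfwf","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_kqt3n7h","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"rdc_kqt50di","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"rdc_kqt54xe","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_kqt68sb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# Index the codings by comment id so any comment's labels can be looked up.
codings = {c["id"]: c for c in json.loads(raw)}

print(codings["rdc_kqt3n7h"]["reasoning"])  # virtue
print(codings["rdc_kqssfwf"]["policy"])     # regulate
```

Keying the parsed list by `id` makes it easy to match each coding back to the comment it describes.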