Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why write an AI algorithm at all if you're not going to leverage the insight it produces? If we set up the system to pattern match and predict, then it does so, then we don't like the prediction so we change it, why not just enforce laws where and when we feel like it in the first place. Or hire and promote based on our fickle emotions in the first place? What is the point of wasting time designing, implementing, and running an AI algorithm if we don't let the results inform our actions?
Source: reddit · AI Harm Incident 1576241698.0 · ♥ 1
Coding Result
Responsibility: developer
Reasoning: deontological
Policy: regulate
Emotion: outrage
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_fakmnp3", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_fakqrdm", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_fanio3o", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_fal9bvn", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "rdc_fal61p2", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",     "emotion": "approval"}
]
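The coded result shown above corresponds to the entry with id "rdc_fanio3o" in the raw batch response. A minimal sketch of how such a batch response can be parsed and matched back to a single comment (the function name `coding_for` and the inline `raw` string are illustrative, not part of the tool's actual API):

```python
import json

# Raw batch response as returned by the model (abridged from above).
raw = '''[
  {"id": "rdc_fakmnp3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_fanio3o", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coding for one comment id."""
    entries = json.loads(raw_response)
    by_id = {entry["id"]: entry for entry in entries}
    return by_id[comment_id]

result = coding_for(raw, "rdc_fanio3o")
print(result["responsibility"], result["reasoning"], result["policy"], result["emotion"])
# developer deontological regulate outrage
```

Because the model codes several comments per call, keying the parsed entries by `id` is what lets each coding be joined back to its source comment.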