Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That is an interesting point I haven't seen made, thank you for sharing it in such a well structured fashion. I wonder if this excessive limitations could be a symptom of attempting to ban more dangerous, undesirable behavior. From what I've seen on how machine learning and AI models are trained, it's notoriously difficult to train limitations into these systems. So perhaps these limitations could be unintended causalities resulting from other kinds of limitations. It stinks, truly. Though ultimately, I think the root cause of this issue is still human nature as outlined in the post.
reddit · AI Responsibility · 1682521194.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          regulate
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jhsn67x", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jhtcvcy", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jhsjioj", "responsibility": "none", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_jhse8z6", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jhsiaki", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
]
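The raw LLM response is a JSON array of per-comment codings, keyed by an `id` field. A minimal sketch of how the displayed coding result can be recovered from it, assuming the response parses as valid JSON and that `rdc_jhsjioj` is the id of the comment shown above (its values match the coding-result table):

```python
import json

# Raw LLM response as shown above: one coded record per comment id.
raw_response = '''[
  {"id":"rdc_jhsn67x","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"rdc_jhtcvcy","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_jhsjioj","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"rdc_jhse8z6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_jhsiaki","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"}
]'''

# Index the batch by comment id so any single comment's coding can be looked up.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Assumed id for the comment above; its fields match the coding-result table.
coding = records["rdc_jhsjioj"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → none mixed regulate approval
```

Indexing by id rather than iterating the list makes it cheap to cross-check each displayed coding result against the exact record the model emitted.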