Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find it very ironic how people say how dangerous language model "hallucinating" is. Same as a lot of humans, instead of saying "I don't know" when it doesn't know, it feels like it's required to give it's own shit take. It's simply better communicating it's shit take. Of course it's not the AI's fault in this case. And in the event it actually does tell you "As an AI language model..." and refuses to answer, people probe it to do so still.
Source: reddit · Topic: AI Responsibility · Timestamp: 1682522647.0 · ♥ 9
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   ai_itself
Reasoning        virtue
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jhsn67x", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jhtcvcy", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jhsjioj", "responsibility": "none", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_jhse8z6", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jhsiaki", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
]
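The raw response is a JSON array of per-comment codings keyed by "id". As a minimal sketch (plain Python with the payload above copied verbatim; the variable names are illustrative, not part of any tool shown here), the array can be parsed and indexed to recover the coding for one comment:

```python
import json

# Raw LLM response, copied verbatim from the record above.
raw = """[
  {"id":"rdc_jhsn67x","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"rdc_jhtcvcy","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_jhsjioj","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"rdc_jhse8z6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_jhsiaki","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"}
]"""

codings = json.loads(raw)

# Index the codings by comment id for direct lookup.
by_id = {c["id"]: c for c in codings}

# Sanity-check that every coding carries the four expected dimensions.
expected = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(set(c) == expected for c in codings)

print(by_id["rdc_jhsn67x"]["responsibility"])  # ai_itself
```

Looking up "rdc_jhsn67x" reproduces the Coding Result table above (responsibility=ai_itself, reasoning=virtue, policy=none, emotion=indifference).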