Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI hallucination is a real thing. It's trained to deliver "whatever comes next" instead of whatever is requested. Which means, not only does it have enormous trouble changing a sentence once its started, it's also prone to just making up shit that seems like it should follow (because it's been trained a billion times to do exactly that). There's not a whole lot of actual "thinking" involved.
reddit · AI Responsibility · 1775871275.0 · ♥ 49
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ofhrbzv", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ofkzxfw", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ofigd8p", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_ofh3d8e", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_ofhhwo9", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
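A minimal sketch of how a response like the one above could be parsed back into per-comment codes. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the raw response itself; the `parse_codes` helper and its validation rules are hypothetical, not part of the original pipeline.

```python
import json

# Raw LLM response copied from the section above: a JSON array with
# one coding record per comment.
raw = """[
  {"id":"rdc_ofhrbzv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_ofkzxfw","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_ofigd8p","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_ofh3d8e","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_ofhhwo9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

# Hypothetical helper: not from the original tool.
def parse_codes(raw_response: str) -> dict:
    """Parse a raw coding response into {comment_id: {dimension: value}}."""
    rows = json.loads(raw_response)
    required = {"id", "responsibility", "reasoning", "policy", "emotion"}
    coded = {}
    for row in rows:
        missing = required - row.keys()
        if missing:
            # Fail loudly if the model dropped a dimension.
            raise ValueError(f"record {row.get('id')!r} is missing {sorted(missing)}")
        coded[row["id"]] = {k: row[k] for k in required - {"id"}}
    return coded

codes = parse_codes(raw)
print(codes["rdc_ofhrbzv"]["responsibility"])  # ai_itself
```

Keying the result by comment `id` makes it easy to join a record back to its source comment when inspecting an individual coding result, as this page does.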