Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In short, LLMs are bullshit machines. Their method for generating bullshit can, in some cases, generate useful and even correct information. But it generates misinformation frequently enough that it's useless unless you're using it for things that are hard to generate but easy to verify. That's a surprisingly small problem set.
reddit · AI Responsibility · 1755014733.0 · ♥ 3
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n7yxgpm", "responsibility": "none",      "reasoning": "mixed",            "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7yp39x", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n86u1hz", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n7z5f3z", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n8b58to", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
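A minimal sketch of how a raw batch response like the one above could be parsed and matched back to an individual comment id. This assumes the model's output is valid JSON, one record per coded comment; the ids are taken from the response shown here, and the lookup approach is illustrative rather than the pipeline's actual code.

```python
import json

# Raw batch response as returned by the model (two records excerpted from above).
raw_response = """[
  {"id": "rdc_n7yxgpm", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "approval"},
  {"id": "rdc_n8b58to", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"}
]"""

# Parse the array and index records by comment id for quick lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Retrieve the coding for one comment.
coding = records["rdc_n8b58to"]
print(coding["emotion"])  # outrage
```

Indexing by id makes it straightforward to attach each coding result back to the comment it describes when the model returns several records in one response.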