Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Jesus Christ. Not on this sub as well: Listen well. LLMs don’t reason. They just do not. They predict the next token. If that looks like something reasonable it’s because of the training data. But they inherently have no capacity for reasoning. Every discussion anyone ever had on Reddit about this, any book ever written, any Wikipedia article is in there. It’s sad that so many people get fooled by LLMs.
Source: reddit · AI Jobs · Timestamp: 1772334188.0 · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_o7tigr2","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"rdc_o7zof8i","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"rdc_dftg9tq","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_dfthjmp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_dfti1vi","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]