Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah, pretty much. This is a limit of current legislation for new technology. A LLM is there to make up stuff in a plausible way. If the user is not able to understand that, it's not that the tool is faulty, it's that it's not used properly and it should not be reliable for that.
reddit AI Responsibility 1682532434.0 ♥ 5
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jhu08g3", "responsibility": "none",    "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "rdc_jhszwlx", "responsibility": "none",    "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "rdc_jht01cu", "responsibility": "company", "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "rdc_jhswu3z", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "rdc_jhtccia", "responsibility": "user",    "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"}
]
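The batch response above codes several comments at once, keyed by comment id. A minimal sketch of how such a response could be matched back to a single comment (the `coding_for` helper and the truncated example payload are illustrative, not part of the tool):

```python
import json

# Illustrative subset of the raw batch response shown above.
raw_response = """[
  {"id": "rdc_jhu08g3", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jhtccia", "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "indifference"}
]"""

# Index the model's records by comment id for O(1) lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

def coding_for(comment_id):
    """Return the coded dimensions for one comment, or None if the
    batch response did not include that id."""
    return records.get(comment_id)

result = coding_for("rdc_jhtccia")
print(result["responsibility"], result["reasoning"])  # user deontological
```

Looking up by id rather than by position guards against the model returning records out of order or dropping one, a common failure mode of batched LLM coding.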