Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If there is more of opinion A than opinion B it will repeat A more often. Or, if it searches a keyword and opinion A is all it finds it will never say opinion B. That's all that's happening. An LLM in 2003 would repeat the lie that Iraq had WMDs like every media outlet did.
reddit · AI Moral Status · 1750531581.0 · ♥ 6
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mz03tcc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mz11i09", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mz1j66b", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mz40z8x", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mzz3ehh", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
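As a minimal sketch of how such a batched response might be consumed downstream (the parsing and tallying here are an assumption, not part of the coding tool; the JSON payload is copied verbatim from the response above):

```python
import json
from collections import Counter

# Verbatim batched LLM response from above: one record per coded comment.
raw = '''[
  {"id": "rdc_mz03tcc", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mz11i09", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mz1j66b", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mz40z8x", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mzz3ehh", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]'''

# Index the records by comment id so one comment's coding can be looked up.
codes = {rec["id"]: rec for rec in json.loads(raw)}
print(codes["rdc_mz11i09"]["reasoning"])  # consequentialist

# Tally a dimension across the batch, e.g. the emotion distribution.
emotions = Counter(rec["emotion"] for rec in codes.values())
print(emotions["outrage"])  # 3
```

Looking up by `id` is what lets the per-comment tables above (e.g. the one showing Reasoning = consequentialist) be rendered from a single batched response.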