Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Can confirm, got a local AI model working more or less fine on my 3080 in about an hour, although it crashes periodically. Took longer for me to understand how LoRAs work than it did to get output.
reddit AI Harm Incident 1708896727.0 ♥ 9
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_ks3pd63","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"rdc_ks2ekib","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
  {"id":"rdc_ks394gb","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_ks4j0m7","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_ks2xwl8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
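Since the raw response is a JSON array with one object per comment in the batch, an individual comment's coding can be recovered by parsing the array and indexing on the `id` field. A minimal sketch, using the exact response shown above (the variable names are illustrative, and which `id` corresponds to the quoted comment is an assumption here):

```python
import json

# The raw LLM response, verbatim: one coding object per comment id.
raw_response = '''[
  {"id":"rdc_ks3pd63","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"rdc_ks2ekib","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
  {"id":"rdc_ks394gb","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_ks4j0m7","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_ks2xwl8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

# Index the batch by comment id so one comment's coding is a dict lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Hypothetical id for the quoted comment; any id from the batch works.
coding = codings["rdc_ks394gb"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → none unclear none indifference
```

Indexing by `id` rather than by position also makes it easy to spot batch responses where the model dropped or duplicated a comment.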