Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My ass. You'd need a fucking mountain of GPT data for that, unless they literally fine tuned a Llama model. It's not something you'd pick up "accidentally" in your training data in a high enough quantity to actually affect the output, the only reason it happens with fine tunes is because those are literally designed to adjust the model with small amounts of data. Grok is supposed to be a foundational model and they're talking out their asses.
reddit · AI Harm Incident · 1702161598.0 · ♥ 9
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_kcnu0uf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"rdc_kco7sbg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_kcp05j5","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"rdc_kco6qgj","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_kcq24x0","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"})
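Note that this raw response is not valid JSON: the list opens with `[` but closes with `)`, so a strict parser will reject the whole output, which would explain why every dimension in the coding result above fell back to "unclear". A minimal sketch of defensive parsing, assuming (hypothetically) a pipeline that defaults all dimensions to "unclear" when the model output cannot be parsed:

```python
import json

# Truncated stand-in for the raw response above, reproducing its defect:
# the list opens with '[' but the model closed it with ')'.
raw = ('[{"id":"rdc_kcnu0uf","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"approval"})')

FALLBACK = {"responsibility": "unclear", "reasoning": "unclear",
            "policy": "unclear", "emotion": "unclear"}

def parse_codes(raw: str):
    """Parse the model output as JSON; on failure, fall back to 'unclear'.

    Returns (codes, ok) where ok is False when the raw text was rejected
    and the fallback codes were substituted.
    """
    try:
        return json.loads(raw), True
    except json.JSONDecodeError:
        # Malformed output (here: ')' where ']' was expected) — the coder
        # records every dimension as "unclear" rather than guessing.
        return [dict(FALLBACK)], False

codes, ok = parse_codes(raw)
print(ok, codes[0]["responsibility"])
```

Repairing the final character (`raw[:-1] + "]"`) would make the list parse cleanly, but a production coder might instead re-prompt the model, since silently patching delimiters risks accepting otherwise-garbled output.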