Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The "diffusion of accountability" framing is more useful than the standard AI safety debate. When something goes wrong, there's always a plausible argument that the developer, deployer, user, or regulator is responsible - and that ambiguity is doing a lot of work right now. Holding companies to the same product liability standards as pharmaceutical or automotive industries would cut through that fast - does the drug/car do what it claims without causing disproportionate harm? Same question applies here.
Source: reddit · Viral AI Reaction · 1777017453.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_ohz2w3t","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
 {"id":"rdc_ohz4nex","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_ohz7nuv","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"rdc_ohzm8iu","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_oi0757q","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"})
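Note that the raw response above is not valid JSON: the array opens with `[` but closes with `)`. That would explain why every dimension in the coding result reads "unclear" even though the model did emit coded values. A minimal sketch of how a pipeline might fall back to "unclear" defaults on a parse failure (the `parse_coding` helper and `DIMENSIONS` tuple are illustrative names, not the actual pipeline's API):

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding(raw: str, comment_id: str) -> dict:
    """Extract one comment's coding from a raw LLM response.

    If the response is not valid JSON (e.g. it closes the array
    with ')' instead of ']'), every dimension falls back to the
    default value "unclear".
    """
    fallback = {dim: "unclear" for dim in DIMENSIONS}
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return fallback
    for record in records:
        if record.get("id") == comment_id:
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    return fallback

# A truncated, malformed response like the one above:
raw = '[{"id":"rdc_ohz2w3t","responsibility":"company"})'
print(parse_coding(raw, "rdc_ohz2w3t"))  # every dimension is "unclear"
```

Under this (assumed) failure mode, repairing the closing bracket before parsing, or re-prompting the model, would recover the coded values that are visibly present in the raw text.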