Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
this is a problem with humans, not a problem with technology. chatgpt didn't tell anyone to do anything, it generated content based on statistically similar patterns of letters found in its training model. it didn't think, it didn't encourage, it didn't say, it didn't anything, it just spit some text out. it literally doesn't know any better, because it doesn't know *anything*. this dude killed himself because *he wanted to*, not because the robot made him do it - or even helped him do it. he used a tool to make something that gave him the excuse he was looking for - that's his fault, not the tool's fault. the tool isn't anything. it's a solution to a math problem. nothing more. it's just an excuse, because his family can't bear to admit to themselves that their kid took his own life. surely, it must be *someone else's fault*, surely it must be *anyone* but the one person who actually had any agency in the matter you hate to see it
Source: reddit · AI Governance · 1762497746.0 · ♥ 41
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nnjeiqc", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_nnjhm91", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "rdc_nnjnqtc", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "rdc_nnjoa2h", "responsibility": "ai_itself", "reasoning": "virtue",           "policy": "unclear",   "emotion": "outrage"},
  {"id": "rdc_nnk27ee", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"}
]
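The raw response is a JSON array of per-comment records, one per comment id, each carrying the four coding dimensions. The coding-result table above appears to be the record whose id matches this comment (here, `rdc_nnk27ee`, inferred because its values match the table). A minimal sketch of that lookup in Python — the function name `code_for` is illustrative, not part of the tool:

```python
import json

# Raw batch response exactly as returned by the model (from the source above).
RAW = """[
  {"id":"rdc_nnjeiqc","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_nnjhm91","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_nnjnqtc","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_nnjoa2h","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_nnk27ee","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]"""

def code_for(raw: str, comment_id: str) -> dict:
    """Parse the batch response and return the coding record for one comment id."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}  # index records by comment id
    return by_id[comment_id]

record = code_for(RAW, "rdc_nnk27ee")
# record["responsibility"] == "user", record["reasoning"] == "deontological",
# matching the coding-result table above.
```

In practice a parser like this would also validate that each dimension takes one of the codebook's allowed values before writing the result back to the coding table.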