Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't know I feel like the possibility that some people would probably try and use ChatGPT to help them cover up crimes or get away with murder is definitely something that would have come up while developing the product. Imagine the lawsuits if it was out here telling people the best way to hide a body.
reddit · AI Harm Incident · 1773349731.0 · ♥ 33
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o50nb5q", "responsibility": "company",   "reasoning": "virtue",           "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_o51l7fw", "responsibility": "none",      "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "rdc_oa4057u", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_oabz523", "responsibility": "user",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "rdc_oa0gx99", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"}
]
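A batch response like the one above can be turned back into per-comment coding results by parsing the JSON and checking each dimension against the codebook. The sketch below is a minimal illustration: `parse_llm_response` is a hypothetical helper, and the `CODEBOOK` value sets are inferred only from the records shown here, so the real codebook may allow additional categories.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# example records above; the actual codebook may define more categories.
CODEBOOK = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "unclear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any record whose value falls outside the codebook."""
    results = {}
    for record in json.loads(raw):
        codes = {dim: record[dim] for dim in CODEBOOK}
        for dim, value in codes.items():
            if value not in CODEBOOK[dim]:
                raise ValueError(f"{record['id']}: unexpected {dim}={value!r}")
        results[record["id"]] = codes
    return results

# One record from the response above, used as a self-contained example.
raw = ('[{"id":"rdc_oa4057u","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
coded = parse_llm_response(raw)
print(coded["rdc_oa4057u"]["emotion"])  # fear
```

Validating at parse time, rather than when the codes are later tabulated, surfaces model drift (new or misspelled category labels) as soon as a batch comes back.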