Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like with ChatGPT especially, the easiest solution would be for it to admit that it’s unqualified to give reliable advice surrounding subjective moral conflict, and should not be relied upon for that sort of thing. It does something a little similar when a user expresses ideations of suicide for example, encouraging them to seek support from real people, often providing sources. There are just some things that ChatGPT isn’t equipped to handle… and a lot of it can be attributed to its seeming lack of perspective.
reddit | AI Harm Incident | 1772774118.0 | ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           industry_self
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_o8sndk2","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"fear"}, {"id":"rdc_o8sqyi6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_o8sr9fz","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"rdc_o8tbz00","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"rdc_o8wyzmp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"} ]