Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
OpenAI has talked about increasing efficiency to decrease costs, so maybe this led to some kind of Occam's razor behavior that cuts short long paths to finding a solution. But it might be that the AI is then outputting the first solution, not the most plausible one.
Source: Reddit, AI Harm Incident 1689751483
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jsl0koz", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jslqgze", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jsk7ztg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jskeoqb", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_jskutj8", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
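When inspecting raw LLM responses like the array above, it helps to check each record against the coding scheme before trusting the parsed result. The sketch below is a minimal, hypothetical validator in Python; the allowed values are inferred only from the records shown here (the full codebook may permit more), and the function name `validate_codings` is invented for illustration.

```python
import json

# Allowed values per coding dimension, inferred from the records
# shown above -- an assumption, not the full codebook.
SCHEMA = {
    "responsibility": {"company", "developer", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "none"},
    "emotion": {"indifference", "outrage", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it has an "id" and every coding dimension
    holds a value allowed by SCHEMA.
    """
    records = json.loads(raw)
    return [
        rec
        for rec in records
        if "id" in rec
        and all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

raw = (
    '[{"id":"rdc_jsl0koz","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"none",'
    '"emotion":"indifference"}]'
)
print(validate_codings(raw))
```

A check like this catches the common failure mode where the model emits an off-schema label (e.g. a new emotion value), which would otherwise flow silently into the coded results.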