Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For now, try some alternative models, Vicuna or OpenAssistant, ... What you are describing was inevitable, but it will get better again. Eventually they will realize just how hard they handicapped gpt4, but it will take a while. There are a lot of papers that describe how performance in seemingly unrelated tasks is inextricably linked. I think programmers will be the most angry about this. Sure they don't care about GPT4 writing poems or the like, but they will definitely notice a drop in its coding abilities.
reddit AI Harm Incident 1681454543.0 ♥ 49
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       unclear
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jg75s2w", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jg7appc", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jg8e8i4", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jg7qdbq", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_jg81vbb", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
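A minimal sketch of how one might inspect a raw response like the one above: parse the JSON array and look up the coding entry for a given comment id. The `codes_for` helper is hypothetical (not part of any tool shown here); the ids and dimension values in the example are copied from the record above.

```python
import json

# Abbreviated copy of the raw LLM response above (first two entries only).
raw = (
    '[{"id":"rdc_jg75s2w","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"resignation"},'
    '{"id":"rdc_jg7appc","responsibility":"unclear","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"}]'
)

def codes_for(raw_response: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

print(codes_for(raw, "rdc_jg75s2w")["responsibility"])  # developer
```

Because the model returns one JSON object per coded comment, a lookup by `id` is enough to cross-check any row of the coding table against the exact model output.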