Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's partially that, but not only that. There were at least a few shifts:
- perception, as you pointed out
- 3.5-turbo model, which is way faster, so it's most likely pruned/smaller (i.e., a bit less capable, but way cheaper to run)
- a lot of usage data that was fed back into it through RLHF or other methods that OpenAI uses. That data was most likely annotated by humans. The annotations include adherence to OpenAI's content policy
- public outcry on Sydney/Bing, which most likely made employees of OpenAI even more careful

And probably a few more.
reddit · AI Responsibility · 1682540355.0 · ♥ 3
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_jhue6u4", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_jhskgwt", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jhsi6wk", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jhtx5xe", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jhtxvkd", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
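A minimal sketch of how a raw response like the one above might be post-processed to look up the coded dimensions for a single comment. The `code_for` helper and the id passed to it are illustrative assumptions, not part of the actual coding pipeline; the JSON string is taken verbatim from the raw response.

```python
import json

# Raw model output: a JSON array with one code record per comment
# (copied from the "Raw LLM Response" above).
raw = (
    '[{"id":"rdc_jhue6u4","responsibility":"company","reasoning":"deontological",'
    '"policy":"liability","emotion":"indifference"},'
    '{"id":"rdc_jhskgwt","responsibility":"government","reasoning":"contractualist",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"rdc_jhsi6wk","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"rdc_jhtx5xe","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_jhtxvkd","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"industry_self","emotion":"approval"}]'
)

def code_for(raw_response: str, comment_id: str) -> dict:
    """Return the code record for one comment id; raise KeyError if absent."""
    records = {r["id"]: r for r in json.loads(raw_response)}
    return records[comment_id]

# Hypothetical lookup of one coded comment by its id.
rec = code_for(raw, "rdc_jhtx5xe")
print(rec["responsibility"], rec["reasoning"], rec["emotion"])
# → none unclear indifference
```

Indexing the records by `id` first means repeated lookups stay O(1) even if the model returns many records, and `json.loads` will raise immediately on malformed output rather than silently miscoding.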