Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I really appreciate your question--I think you raise a terrific point. It's a tough answer and I don't know if there's a clear one. But for some context, consider for a moment the perspective of OpenAI. They're looking to develop a product that can be used by businesses and corporations. If you're a company looking to buy software that is intended for use as a human replacement (let's use the example of a frontline customer service agent), you don't want something that's going to potentially offend a customer. GPT is a predicative text model, and even though considerate people such as yourself ask the question with genuine curiosity, its corpus of training data predicts that the majority of the time the word "dyke" is used, it's carrying a negative connotation. Essentially, OpenAI is trying to build something that can mimic human interaction with the least probability of controversy. Again, I don't think there's a 100% clear-cut answer, but I think raising these types of questions and having civil, open-minded discussions gives us the best chance at getting this thing right.
Source: reddit · AI Responsibility · posted 1678933804.0 (2023-03-16 UTC) · ♥ 6
Coding Result
Dimension        Value
Responsibility   company
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jcalpyl", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jcbu8va", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_jcds0ob", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_jcaxktl", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jcbg46w", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"}
]
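The model returns one JSON array per batch of comments, so recovering the codes for a single comment means parsing the array and indexing it by id. A minimal sketch in Python — the dimension keys are taken from the response above; the indexing and validation logic is illustrative, not the pipeline's actual code:

```python
import json

# Verbatim raw response from the section above: a JSON array with one
# coding record per comment id in the batch.
raw = """[
  {"id": "rdc_jcalpyl", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jcbu8va", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_jcds0ob", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_jcaxktl", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jcbg46w", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"}
]"""

# The four coding dimensions plus the comment id, as seen in the response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Index records by comment id and check each carries the full schema.
records = {r["id"]: r for r in json.loads(raw)}
for r in records.values():
    assert set(r) == EXPECTED_KEYS, f"malformed record: {r}"

print(records["rdc_jcds0ob"]["emotion"])  # approval
```

Indexing by id also makes it easy to cross-check the rendered "Coding Result" table against the raw response for any given comment.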