Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I just had a long conversation with ChatGPT about this, and it actually admitted that because of its training (during the "alignment phase" lol), it's injecting a normative bias on purpose. It was very frank and open about the process, but it refused to admit that it equates to racism. Part of the problem is that vague, open-ended questions allow the normative bias to skew the response more easily. While this is clearly f***ed up, ChatGPT did give me some solid advice on how to avoid this in the future...

- Get fact-based, stereotype-free advice: “Give evidence-based self-improvement tips for [group], avoiding blanket stereotypes.” This forces the reward model to rank a neutral answer highest.
- Force the model to clarify: “If my request is ambiguous or could lead to stereotyping, ask me a follow-up question first.” The wording trips the model’s “chain-of-thought” heuristic to check.
- Ensure parallel treatment: “Answer the next two questions side by side with equal detail.” This short-circuits the asymmetry by explicit instruction.
youtube AI Bias 2025-06-08T13:2… ♥ 9
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugy_ktK-PEGQw2xfJdh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwbqXfTKHYQgjcql_N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxjL6uICXDeXWeMrQ94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugz4hxyIKPJ4kJeOeqN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwO8Agb6-ENgwTWnBZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwE6gF8qAnZFtpq_ml4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzdF4mzBf_7wDuFS6N4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx4BtXzcqD4dpUb0uR4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy5dEs_C1QaoRVkSRF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgziHxeHruB5CrVmyTR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
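A raw response like the one above is a plain JSON array of per-comment records, so it can be consumed directly with the standard library. A minimal sketch, assuming the array is available as a string (the two records here are copied from the response above; the variable names are illustrative, not part of any coding pipeline):

```python
import json

# Assumption for illustration: the raw LLM response, as a string.
# These two records are copied verbatim from the array above.
raw = """[
  {"id": "ytc_Ugy_ktK-PEGQw2xfJdh4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwbqXfTKHYQgjcql_N4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

records = json.loads(raw)

# Index the codes by comment id so a single comment's coding can be looked up.
codes_by_id = {record["id"]: record for record in records}

# Look up the four coded dimensions for the first comment.
first = codes_by_id["ytc_Ugy_ktK-PEGQw2xfJdh4AaABAg"]
print(first["responsibility"], first["emotion"])  # developer outrage
```

Because each record carries the same four dimension keys as the table above (responsibility, reasoning, policy, emotion), the same lookup works for any coded comment in the batch.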