Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is a major problem with this example though. I asked ChatGPT the same prompt with no race, then with white, then black, even Asian. When you do that, you are changing the question and directly indicating that the answer should be different based on race. You can't blame ChatGPT for giving you exactly what you asked for. When I asked how black people could improve themselves, the answer was ALSO racial:

"4. Challenge Internalized Racism and Colorism
Why: Centuries of oppression have planted damaging ideas about worth, beauty, and intelligence.
How: Embrace natural hair and Black beauty standards, celebrate diverse skin tones, and reject negative stereotypes through education and media awareness."

Then I probed further:

ME: Why, when I asked you how people could improve, without naming any race at all, you gave a very straightforward answer? Yet when I started specifying white people, black people, and Asian people, you geared your answer toward those groups?

CGPT: When you first asked, “how can people improve themselves?” without mentioning any group, I gave a universal, broadly applicable answer — things that can help just about anyone regardless of their background (like reading, setting goals, exercising, etc.). But when you started specifying racial or ethnic groups, like white, Asian, or African American people, that does change the meaning of the question — or at least it adds context.

Makes sense to me. In short, your first two prompts were completely different questions. You were ASKING for a racial difference.
Source: youtube · AI Bias · 2025-07-05T04:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
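These four coded dimensions plus the timestamp form a fixed record shape. As a minimal sketch of how one coding could be represented and validated, assuming label sets inferred from the values seen in this project's output (not a documented schema), something like the following could work; the class and constant names are illustrative:

```python
from dataclasses import dataclass

# Label sets inferred from values observed in this project's output;
# an assumption, not a documented schema.
RESPONSIBILITY = {"user", "developer", "company", "ai_itself", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"none", "unclear", "liability", "industry_self"}
EMOTION = {"resignation", "outrage", "indifference", "fear", "approval", "mixed"}

@dataclass
class Coding:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp

    def __post_init__(self):
        # Reject any value outside the expected label sets.
        for value, allowed, name in [
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name}: {value!r}")

# The coding shown above:
coding = Coding("user", "consequentialist", "none", "resignation",
                "2026-04-27T06:26:44.938723")
```

Validating in `__post_init__` means an off-schema label from the model fails loudly at construction time rather than slipping into downstream counts as a silent extra category.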
Raw LLM Response
[ {"id":"ytc_Ugz_KtrNTqNNe3d2n894AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzjPZ_LiGs-K7vC1sx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwABPA0Ui4P2EIStPt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxWKV7GReO_pL_qztl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxyTitfzq7PUkZpa6l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzI5AhYHMfMR-6U50l4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy_P67Z7KxLvYmvPn94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwF9HRdkX80ad1RVeF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugzd3EnwDw0K_aBwjDh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwoH-pYwVvXacBqyRx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]