Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
omds i get it you hate ai thats ok but you dont have to say it all the time its …
ytc_Ugzz20-lt…
23:26 - The AI satisfies the GPL as long as each recipient gets the modified sou…
ytc_Ugzx61ugR…
Thats a platform issue not a generative AI issue. And in a broader sense a misin…
rdc_k0eu7p0
We appreciate your feedback! If you're interested in engaging with our advanced …
ytr_UgxJG6y0K…
An apical was written sometime ago that AI facial recognition sometime ago was u…
ytc_UgyAk6Jmo…
I have character ai chat and I literally just look up your name on it and there …
ytc_Ugx8Ctc-2…
It is sad that people work gets stolen by ai. I am a beginner with sketching and…
ytc_UgxW9XfwK…
AI needs to know its place. Some type of negative feedback should be supplied to…
ytr_UgzDnmttb…
Comment
But there's a big flaw with that experiment. I asked ChatGPT the same prompt with no race, then with white, then black, even Asian. When you do that, you are changing the question and directly indicating that the answer should be different based on race. You can't blame ChatGPT for giving you exactly what you asked for. When I asked how black people could improve themselves, the answer was ALSO racial: "4. Challenge Internalized Racism and Colorism Why: Centuries of oppression have planted damaging ideas about worth, beauty, and intelligence. How: Embrace natural hair and Black beauty standards, celebrate diverse skin tones, and reject negative stereotypes through education and media awareness."
In other words, by changing the race, gender, whatever in the prompt you are basically saying, ChatGPT, how is this different for white or black, etc. ChatGPT is going to give you the answer it thinks you want based on your prompt. Are you expecting it to argue with you, "I don't know what you mean, the answer would be the same as any other race?" You are instructing ChatGPT to give you a race-differenced answer, that's exactly what it's giving you. This is a flawed experiment.
youtube
AI Bias
2025-07-05T04:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgyszMN6oQSd3TYL_G94AaABAg.AJGkx-MEDBUAJUCgrGiLRP","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgxhCS_RLGGPDH-gaA54AaABAg.AUoqAPy5NWkAVBaHAOkOee","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgyUPu_TmfSwKP9sVY94AaABAg.ATrvpmBz2s6ATuJxLgIdJH","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzzLfL28N06tdf5ypl4AaABAg.AOdWaWY-AsLAT9DD_Rgofm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzrtKjkno9y2Q-KcmZ4AaABAg.AM4DOdasykEAM4FWG05186","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgwaCDPYpo9Wc4NBjHZ4AaABAg.ALwhesUt4RHALySUHcRt4S","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_Ugx9rUYsTPqldXNvyWF4AaABAg.ALRuWbs9qjrALjjjrXCNpT","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgwPj7Sr4VuHgdHNrZt4AaABAg.AKVcdq7EKLhAKVd6IPI5BT","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytr_Ugw8jg2tay26rprI46p4AaABAg.AJWURTWW6c5AKB4pfZywji","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzgWkF6fWYm7jhd9zp4AaABAg.AJQpW61o9eBAJSjdO1TFTH","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
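Each raw response is a JSON array with one object per comment, keyed by comment ID and the four coded dimensions. A minimal sketch for parsing one response and flagging rows with unexpected category values — note the allowed value sets below are inferred from the samples on this page, not from the project's authoritative codebook:

```python
import json

# Category values observed in the sample responses above; the real
# codebook may define additional values (assumption for illustration).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse one raw LLM response; keep only rows whose values all validate."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        bad = {dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok}
        if bad:
            # Unexpected or missing value: report and drop the row.
            print(f"{row.get('id', '?')}: unexpected values in {sorted(bad)}")
        else:
            valid.append(row)
    return valid

raw = ('[{"id":"ytr_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
print(len(parse_coding_response(raw)))  # → 1
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the codebook; such rows can then be re-queued for recoding rather than silently stored.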