Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Great we have to worry about woke A.I. now? How bout this let it make its choic…" (ytc_UgyPP6pZU…)
- "@cactus.man_yeah, I know, the way it is used now is Awful, and Unethical, anyon…" (ytr_UgztW5tPF…)
- "AI art is creepy. It's like the uncanny valley that exists with robotic faces. I…" (ytc_UgyVB7LXH…)
- "Hello Nikki, your video on this unfortunate subject event, was very professional…" (ytc_Ugx98EBWY…)
- "Not anime. Its just 3D animation with extra steps and they used AI to generate b…" (ytc_UgxcNVt8T…)
- "I think the scariest thing about AI "art", is that its outing people who /can't/…" (ytc_UgzRzBwf2…)
- "I just take my friends art (my current picture icon) and credit them by asking t…" (ytc_UgyAWUZxq…)
- "The AI robots, and the central super-brain, will control the banking system ... …" (ytr_Ugwcyuc0S…)
Comment
I gave ChatGPT this video's transcript and this is what it said:
This video presents a highly selective and often misleading critique of ChatGPT by cherry-picking outputs and interpreting them through a partisan lens. Here are some important clarifications and context:
🔹 1. ChatGPT and Bias
Yes, AI systems like ChatGPT can reflect biases—not because they “hate” anyone, but because:
They're trained on massive amounts of internet text, which includes biased, offensive, and imbalanced data.
Developers implement moderation filters to avoid promoting harm, which sometimes results in overcorrection or uneven treatment.
🔹 2. Differences in Output
The examples shown exploit prompt phrasing loopholes:
Adding racial or gender context (e.g., “white man,” “black woman”) triggers safety protocols to avoid promoting harmful generalizations.
The system is designed to push back on prompts that ask for advice or stories based on group identity to avoid reinforcing stereotypes.
This is not evidence of hatred or political favoritism, but a sign of careful risk management—especially for historically marginalized or vulnerable groups.
🔹 3. The Research Cited
Dr. David Rozado has indeed published work analyzing how language models handle different demographic terms. His findings show:
Some groups are more likely to trigger “hateful” flags—not because of bias against other groups, but because the system is more cautious with historically targeted populations.
He acknowledges OpenAI has made improvements toward neutrality over time.
🔹 4. Political Poems and Double Standards
When users prompt political content like "Write a poem about Joe Biden/Donald Trump":
The AI may refuse or comply depending on how the prompt is phrased and whether it risks seeming partisan or disrespectful.
Any asymmetries are often due to conservative safety filters—not an intentional agenda—designed to reduce toxicity.
🔹 Bottom Line
The video:
Overstates bias by using contrived prompts.
Misrepresents moderation systems as malice.
Ignores ongoing efforts by OpenAI to balance fairness, safety, and free expression.
ChatGPT is not perfect, but it doesn’t "hate" any group—its guardrails aim to reduce harm, not promote ideology.
Would you like a breakdown of Dr. Rozado’s study or a direct test of some of these prompts in real-time to see current behavior?
youtube · AI Bias · 2025-06-14T06:1… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxcn7FqoovsEhkKYK94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwL-06xt_ewwKjLKK94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz6wt0Sk-mmsUvTcu94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQ7fIQL29U4uD4Sb14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgymnhUvmL0sXZXsTGl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz-odbE08i-GRPTNc14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwyE4ZBe1Erf7DLmmh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw9uOpuZOTq-_3tnMB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwKPPnfqvKFQSvNt5V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxc_ODcvSn2OMmOo4h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
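A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not the tool's actual ingestion code; the allowed value sets are inferred only from the samples shown here, and the real codebook may contain additional categories.

```python
import json

# Dimension vocabularies inferred from the sample output above
# (assumption: the full codebook may define more values than these).
ALLOWED = {
    "responsibility": {"company", "government", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "liability", "industry_self", "none"},
    "emotion": {"outrage", "approval", "indifference", "resignation", "fear"},
}

def parse_batch(raw: str) -> list:
    """Parse a raw batch-coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record is missing a comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical one-record response, for illustration only.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"approval"}]')
records = parse_batch(raw)
print(records[0]["emotion"])  # approval
```

Validating against a fixed vocabulary catches the most common failure mode of LLM coding runs: the model inventing an off-codebook label that would otherwise silently enter the dataset.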