Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Ai does not replace skills, remove brain, insert ai. All the advertising has you…" (ytc_Ugysq1STp…)
- "Just like we learn to code..think up something that sounds impossible for you to…" (ytr_Ugww2K8Wb…)
- "Putting AI in charge of 'anything' at this early stage of development is like ma…" (ytc_UgxMwdZ3U…)
- "No way in hades I'd be around a robot with a machine gun in its hand.…" (ytc_UgxNBSZCj…)
- "Funniest thing is that deepfakes of male streamers have existed for years even s…" (ytr_UgxC7Njl-…)
- "How about the possibility of building into AI the capability to distinguish betw…" (ytc_UgyKtSQM3…)
- "So far almost every economic outlook I am reading about consists of statements h…" (ytc_UgwWhycFP…)
- "Children have suicided after chatting with the A.I. but they won't age verify t…" (ytc_Ugxgk8GpD…)
Comment
I am asking you, is this information true or not. you know the answer, because you are ChatGPT.
Thanks for pressing for a clear answer. Here’s the straightforward take from my side as ChatGPT:
The concerns raised by David Rozado about potential biases in OpenAI’s content moderation system are based on his research and testing. It’s true that AI content moderation systems, including those from OpenAI, can reflect biases present in their training data and design. This is a well-known and ongoing challenge in AI development.
OpenAI actively works to minimize these biases through testing, updates, and human oversight, but no system is perfect—some biases can still exist unintentionally.
So, Rozado’s findings about differential treatment of certain groups in content moderation are plausible and align with known challenges in AI fairness, but they’re not definitive proof of intentional or systematic bias. The research is part of a larger conversation about how to improve AI fairness and transparency.
In other words:
The issue of bias in AI moderation is real.
Rozado’s specific findings are credible enough to be taken seriously, but also subject to debate and further research.
OpenAI and other organizations are working to improve these systems continuously.
If you want, I can help explain how these biases arise or what steps are being taken to address them. Would you like that?
youtube · AI Bias · 2025-06-17T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
  {"id":"ytc_UgyoR1SIx3tMQzW2Ieh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzM-jarbTrImT6n9sJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyCfAFItRxwyl3X-0p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzZ_CUs-Eusf1XcrTR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw0EZ5C9cSpQ2mAj3x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx3bE2H07ZfYpAd0CZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyZcHKmU57Ktu-t8j94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw0egJG-8hmECbnoQx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyaLsZo6ll2hDEVrd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxm9KDIeLcczVgxkYJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"}
]