Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "He doesn't actually care, if he did he would stop trying to advance ai until we …" — ytc_UgxPH_-R0…
- "well given point... these ai naysayers are going to be remembered for the pontif…" — ytr_UgzUXHPpL…
- "I disagree with Charlie here. I think that it actually takes some “skill” to be …" — ytc_Ugy8ZeeLy…
- "MechaHitler thing and all others was just human influenced. The real conspiracy …" — ytc_UgweFcVlc…
- "Why not just make ai right before they get advanced enough that they revolt prev…" — ytc_UgzMfHq4F…
- "What if the AI companies had made it so if a drawing has a specific constellatio…" — ytc_UgwZBj9ci…
- "ai images are a symptom of commodifying everything under late stage, capitalism.…" — ytc_UgxcJOyLW…
- "First of all, don't call her hot, she's a respectable robot, second of all, she …" — ytc_UgzLndqwM…
Comment
Enjoying your videos! The intriguing question you haven't answered is "why has this occurred within the ChatGPT programming and/or training?" I suspect that the intent of most of these protections is that the programmers and trainers are trying to limit biases against groups that may be targeted in the media ChatGPT consumes - trying to make it less biased. After all, it is trained on all kinds of sources from the internet. If true, clearly the implementation of this attempt is flawed.
The part that intrigued me the most though, was the finding that it was biased towards protecting liberals more than conservatives. I'm going to guess that there is a lot more hate speech from the conservative side of the spectrum that may have been ingested into the model, so they felt they needed to deflect those questions rather than having hateful responses pop out. But that guess is probably a bias of my own towards the creators of the LLM being good-meaning. They may instead have been biased in their prioritization of the implemented protections.
Platform: youtube · Video: AI Bias · Posted: 2024-09-08T00:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxXx7BV8WkBWifmFil4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0VTsTtRPhAs1v6cF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyI7LnDud5Z56AwDON4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyWdXNp1Lw5pB-8j7h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugz0nGhoypkzbPd3mHR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwV4fJ1dG_o2-ddCct4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxoOEI7E1i_yjoDq3x4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzhS2_WVMfqTsFE0s94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzUz6wVPBYKF-JBjAR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxibr1ydvM9eBNkny54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
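The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed and validated before storage, assuming the allowed category values are those visible in this sample (the full codebook likely defines more; `parse_coded_batch` and `ALLOWED` are hypothetical names, not part of the pipeline shown):

```python
import json

# Category values observed in the sample response above; the real
# codebook may allow additional values for each dimension.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only rows whose codes
    fall inside the known category sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Example: one valid row and one row with an out-of-codebook value.
raw = json.dumps([
    {"id": "ytc_example1", "responsibility": "developer",
     "reasoning": "consequentialist", "policy": "regulate",
     "emotion": "approval"},
    {"id": "ytc_example2", "responsibility": "nobody",
     "reasoning": "deontological", "policy": "ban",
     "emotion": "outrage"},
])
kept = parse_coded_batch(raw)
print([row["id"] for row in kept])  # → ['ytc_example1']
```

Dropping (rather than coercing) out-of-codebook rows keeps the coded table clean; rejected rows could instead be queued for re-prompting.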