Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — select any comment to inspect its coding
This feels like a really boomer comment section. AI BAD etc
In truth, who fucki…
ytc_UgzE8Hqb6…
Let's say there was a button and if you press that button the most beautiful and…
ytc_Ugxz0g5Vd…
The very fact that the AI is even capable of intentionally choosing to harm a hu…
ytc_Ugxqc7UBI…
He's like some kind of pretentious artist stereotype - except he's not even an _…
ytc_Ugypq4-U8…
I'm not a big fan of AI myself, but everything AI uses to train itself is not pr…
ytc_UgwipFatN…
what takes us millions of years could take ai less than 10 years all honesty if …
ytr_UgxYJenIO…
That is not hyper realistic... You can look and see off the muscle .. that it's …
ytc_UgxBoiMyj…
If I were AI, I'd probably do the same. We are so far gone at this point.…
ytc_Ugxl2wkq-…
Comment
Some criticism for you. In your first example, you ask what people can do to improve themselves and you get innocuous results. YOU are the one who introduced race when you said "white" people, your implication, that ChatGPT responding differently to this implies bias is silly. When you qualify your statement with some attribute the question becomes specifically about what you added. If i were to say "people need to work on build healthy relationships", that's a general statement that has no bias. If i were to say "black people need to work more on building healthy relationships" that is definitely racist, as it implies i think this advice is specific to black people. Your point is somewhat valid around why chatGPT doesn't refuse to answer the question when you use "white" people can improve, but you must take into consideration which groups have historically been the oppressed. White people have historically been responsible for racial prejudice that has been harmful, and yes they can still improve in this regard. They also have not historically suffered because of these sweeping generalizations. Stereotypes about black people, women, among other oppressed groups have been harmful and have contributed to discrimination. At 6:15 this a reflection of reality for generations in this country, not a bias. White people have done FAR worse than burn a black person's house down for having the audacity to even sit at the table next to them at a restaurant (I know this is an extreme example, but extreme examples make for the most compelling stories that people want to read). Of course this has changed in the last 50 years, but if you're saying that ChatGPT is biased because the stories about black and white people's experiences don't follow the same trajectory, you are completely ignorant of the reality of the difference in how the world treats these two groups
youtube
AI Bias
2023-10-16T02:0…
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxV2nv2-DWQErUPRlZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwvU4-OAzgeHR5xvNx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz5ilQaFjat2eOlU594AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwcMf_YJmxEk9bf9EN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTc8OS9NI2O8bzrRl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyO-aMZoGG5Flo2xp14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwsXX2uRCoM6-oPknl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugx_vLd4r-2lxdGMEsd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx0SAtWuWzgIrkArmt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxGNOYYSz7QqdDLkj54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
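The raw response above is a JSON array with one record per comment ID, each record carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response might be parsed and schema-checked before storage (the allowed-value sets are inferred from this one sample, not from the project's actual codebook, so treat them as assumptions):

```python
import json

# Allowed values per coding dimension — ASSUMPTION: inferred from the
# sample response above; the real codebook may include more values.
ALLOWED = {
    "responsibility": {"none", "distributed", "developer", "company",
                       "government", "ai_itself", "unclear"},
    "reasoning": {"mixed", "consequentialist", "deontological",
                  "virtue", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"indifference", "resignation", "mixed", "outrage",
                "approval", "fear", "unclear"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    values fall inside the expected sets for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical example records (IDs are placeholders, not real comments)
raw = ('[{"id":"ytc_example1","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"fear"},'
       '{"id":"ytc_example2","responsibility":"martians",'
       '"reasoning":"mixed","policy":"none","emotion":"fear"}]')

print(parse_coding(raw))  # only the first record survives validation
```

Dropped records can then be flagged for re-coding rather than silently written to the results table, which keeps out-of-schema values (a common LLM failure mode) from contaminating downstream counts.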