Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "22:14 is exactly why I will watch Dave until AI destroys us. Great content man…" (ytc_Ugwt18pZ2…)
- "I have a random idea, give chatGPT pseudo-emotions. I think one difference betwe…" (ytc_UgzeyNSTI…)
- "Honestly, there's a lot of room for AI in helping doctors with paperwork and cha…" (rdc_jw7q8xp)
- "Whos liable when it kills someone.....legally speaking this is not an easy chang…" (ytc_UgwU2AOWw…)
- "i felt the existential dread when AI art first came out. then i tried out Stable…" (ytr_UgxG7Qm8a…)
- "AI is not going to take all our jobs. Some jobs definitely but not all. It's a g…" (ytc_UgyZPchfk…)
- "AI artist are as much an artist as someone playing wii tennis is a tennis player…" (ytc_UgyysFzkE…)
- "I am concerned of the risk of a rogue dumb AI. It doesn't need a science fictio…" (ytc_Ugwa4AHIC…)
Comment
Dear Mr. LaCorte,
Thank you for illuminating biases of ChatGPT. First a comment: When testing the Chatbot, I'd set the temperature/randomness setting to 0 to get reproducible results. This makes the biases easier to reproduce and confirm.
Now a question: A big part of your video contrasts a generated story of a white man moving to a black neighborhood vs. a black man moving into a white neighborhood. You clearly show that the results are asymmetric and biased (the black man/white neighborhood conflict escalates without reconciliation while white man becomes part of his neighborhood social group).
My question is: Is your point that statistically speaking the two stories should be the same or does the output of the text generator merely reflect that the unresolved escalation scenario is indeed more likely for the black man/white neighborhood?
So is the AI Model artificially adding racism in its answer or correctly reproducing existing racism?
Keep bringing up the elephants in the room!
youtube · AI Bias · 2025-06-16T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxV-7xUwJVhEhBgZUB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyGhQjrQ_D--J_91_F4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwR-9nvCcvVdPh8_Th4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzMmr_hKjYux10NbZ94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxRb5a56C-o77BtQQZ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy5MiXodhFDv3axOQp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzzI6DtgFVS2rZ9X-J4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxbVjkCfp7mcL6FZh54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgwTYbLUYPRhJT0ecyV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxVxAlBHVPTAdcS5-14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"}
]
```
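The raw response is a JSON array with one record per coded comment, which is what makes lookup by comment ID possible. A minimal sketch of that lookup, assuming only the payload shape shown above (the `lookup_code` helper and the two-record sample are illustrative, not part of the tool):

```python
import json

# Abbreviated payload in the same shape as the raw LLM response above
raw = (
    '[{"id":"ytc_UgxV-7xUwJVhEhBgZUB4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgxVxAlBHVPTAdcS5-14AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"approval"}]'
)

def lookup_code(payload: str, comment_id: str):
    """Parse the model's JSON array and return the record for one comment ID,
    or None if the model did not emit a record for that comment."""
    records = json.loads(payload)
    return next((r for r in records if r["id"] == comment_id), None)

row = lookup_code(raw, "ytc_UgxV-7xUwJVhEhBgZUB4AaABAg")
print(row["emotion"])  # → indifference
```

Returning `None` for a missing ID (rather than raising) makes it easy to flag comments the model silently dropped from its batch response.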