Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
How bout a large language model that only praises you for how great your videos …
ytc_UgwS_XfHS…
AI and robotics should be used to help out humans, not replace them. This should…
ytc_Ugyj-nnLo…
This interviewer is looking for a place to interject without understanding what …
ytc_UgxkaTkbI…
Elon had to tell it and people laughed it off, AI will be the beast and bringing…
ytc_Ugwwd-WaQ…
Stop describing it as AI. Oppenheimer was death the destroyer of worlds. Fortuna…
ytc_Ugz-HKLCO…
The people who say AI won't replace huge numbers of white collar workers are peo…
ytc_UgxxUK4rN…
Its so hard to watch this. With the elephant in the room and they are all just l…
ytc_Ugwxa1sSJ…
1:32:51 ok, but if you keep giving the lower rung tasks to an AI instead of a hu…
ytc_UgzQ6Fizk…
Comment
The point of the Google image search example isn't to accuse Google of some grave injustice, it's just an easy to understand example of how just because a computer is generating it doesn't mean its output isn't biased. The society it's getting its data from is biased in favour of female nurses, so it will return mostly pictures of female nurses even when the user is just looking for "nurse" without specifying gender. Once you understand that, it's easy to understand how that can become a problem when the situation is more complicated, the stakes are higher, which is the whole point of the episode.
Let's say there's 10 male nurses in the world and 90 female nurses. Out of those 100 nurses, one man and two women have committed the same misdemeanour on the job. Given that, would it be fair to make decisions on who to employ as a nurse based on the idea that 10% of men have committed this misdemeanour but only ~2% of women have? An AI trained with this data might. Worse yet, you don't even know it's doing this because its decision-making process is more or less a black box.
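The base-rate trap the comment describes can be checked directly. A minimal sketch, using the comment's own hypothetical counts (10 male and 90 female nurses, 1 and 2 offenders respectively):

```python
# Hypothetical counts taken from the comment above (illustrative, not real data).
male_nurses, female_nurses = 10, 90
male_offenders, female_offenders = 1, 2

# Per-group rates look very different...
male_rate = male_offenders / male_nurses        # 1/10  = 10%
female_rate = female_offenders / female_nurses  # 2/90 ≈ 2.2%

# ...even though the overall rate is just 3 in 100.
overall_rate = (male_offenders + female_offenders) / (male_nurses + female_nurses)

print(f"male: {male_rate:.1%}, female: {female_rate:.1%}, overall: {overall_rate:.1%}")
```

The male group's rate is dominated by a single case because the group is small; a model trained on these counts would still learn the 10% vs ~2% gap as if it were a stable signal.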
youtube
AI Harm Incident
2019-12-14T22:5…
♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwdzQf4Z81Wub_oBNh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzKdgOX1tqdrJ-LX8l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx17723EZEsceZt_yp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwJjjxAxVRWcecmWyN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyRuJzvS40auV0Pk7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwIFEfyAHEN7eFJOHF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzVtaD4ShO5brx3M9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxFtDEwbaEIkOGAyr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzK-DjV2ISsCeBaM2B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyolKZJVkQldyGTCjh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
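The raw response above is a JSON array of per-comment codes, which makes the "look up by comment ID" feature straightforward to reproduce. A minimal sketch, assuming the response parses cleanly and every object carries an `id` field (the ID below is a hypothetical placeholder, not one of the real truncated IDs):

```python
import json

# A shortened stand-in for the model's raw output; the id is hypothetical.
raw_response = '''[
  {"id": "ytc_EXAMPLE", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

# Index the coded rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coding across all four dimensions.
print(codes["ytc_EXAMPLE"]["emotion"])
```

In practice the raw model output may not be valid JSON (truncation, trailing prose), so a production version would wrap `json.loads` in error handling and validate that the expected dimension keys are present.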