Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
I wish that they actually told you about biases in algorithms instead of baiting you into watching their video on the case, it’s simple. When a person creates an algorithm, especially one that has to do with identifying images the algorithm will be more biased towards the features or race that is more prominent, meaning that if there are little examples of black persons in the algorithm database there will be more guessing by the algorithm, which is where there is bias towards the more represented groups in the algorithm.
Source: youtube
Case: AI Harm Incident
Posted: 2021-08-23T06:1…
Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy5w8JIhaEA5FfvQRF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxVUpTRc6Z4xyeZYc94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyqK3VDFgmqE1huZop4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
  {"id":"ytc_Ugy5NgH05lgd9WaKFQh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwSrYyVkUoYKQT8ElB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxfrSALOhYS8jwkK_N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz_emp2nmTOtJkGyUJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5laVEWVnOYjOFfuF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwd66r5BWHmKsifHTR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw1hYERBXHybkyXf7F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
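A raw response like the one above is a JSON array with one coding object per comment, carrying the same four dimensions shown in the coding-result table. A minimal sketch of how such a response could be parsed and looked up by comment ID (the function name and the validation behavior are illustrative assumptions, not the tool's actual implementation):

```python
import json

# Assumed shape of a raw LLM response: a JSON array of coding objects,
# each with a comment ID plus the four coding dimensions.
raw_response = """
[
  {"id": "ytc_Ugy5w8JIhaEA5FfvQRF4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxVUpTRc6Z4xyeZYc94AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw response and index codings by comment ID,
    skipping entries that lack an ID or any of the four dimensions."""
    codings = {}
    for entry in json.loads(raw):
        if "id" in entry and all(dim in entry for dim in DIMENSIONS):
            codings[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return codings

by_id = index_codings(raw_response)
print(by_id["ytc_Ugy5w8JIhaEA5FfvQRF4AaABAg"]["emotion"])  # outrage
```

Indexing by ID up front means each "look up by comment ID" query is a single dict access rather than a scan of the array.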