Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgyXt6Oiy…: "AI can't even make you a proper PDF study sheet, you would be crazy to let it ru…"
- ytc_Ugy8R0dEn…: "That’s not going to happen at least not that quickly. My robot vacuum is so unre…"
- ytc_UgwQs14wb…: "'so if the software is saying it.. its legit'. That is actually disturbing. AI w…"
- ytc_Ugw9YLdUr…: "That first shot is incredibly lifeless. It's mostly polished so it looks ok at a…"
- ytr_UgxYL72p5…: "I’ve been studying AI from the perspective of human education. I believe a big p…"
- ytc_UgxyoUNOa…: "Bro that doesn't tell me anything new maybe I'm just a conspiracy theorist but h…"
- ytc_UgzjrcWNh…: "Of course. They pay tax so we can profit too from that system to protect it from…"
- ytc_UgwkQe-kl…: "How funny is that we invented automation tools to make our life more slacking, y…"
Comment
The Ai is likely taking information from the police and their crime statistics, which is horrifying. If you didn’t know more patrols are sent to poor black neighbourhoods than white ones, this is because the ai they use uses crime statistics and tells them that’s where the crime will be and then because there’s more patrol they stop more crime not because there is more crime but because there are more people looking out for it l. This ends up creating a loop where the ai reads that crime is found in black neighbourhoods and the ai reads this as black neighbourhoods have more crime purely because that’s all they ever get told. The way the police system promotes racist beliefs is horrible and more people need to talk about all the problems they cause instead of just trusting the authorities.
youtube | AI Bias | 2022-12-21T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
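A coded record like the one above can be sanity-checked against the coding scheme's category sets. This is a minimal sketch; the allowed values below are inferred from the responses visible in this dump and may be an incomplete subset of the full codebook.

```python
# Category sets inferred from the raw responses shown in this dump
# (assumption: the real codebook may contain more values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "government",
                       "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"outrage", "indifference", "mixed", "approval",
                "resignation", "fear"},
}

def validate(coding: dict) -> list[str]:
    """Return the dimension names whose value is missing or not allowed."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

record = {"responsibility": "government", "reasoning": "deontological",
          "policy": "liability", "emotion": "outrage"}
print(validate(record))  # [] -> every dimension has an allowed value
```

A record with an unknown value (or a missing dimension) is flagged by name, which makes it easy to spot malformed model output before it enters the coded dataset.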
Raw LLM Response
```json
[
{"id":"ytc_UgzkfmeD-6MpufDTBB14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyccmWodjY-Xn8VQpN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy9Aqr5AK6c84PDdhp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwk9CGRc7yg3mygysx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzIhpIZbD6Oh-AqEL94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw9d0C7Xqrbht5jAct4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyFp-TzeAaE9cIq8rd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxnbTt6DzFzTnEjs_B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzfwsHCuroi7qElCrF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwUU0ID4N0JrOQi3PV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```