Raw LLM Responses
Inspect the exact model output behind any coded comment. Look a comment up by its ID, or click one of the random samples below.
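The lookup is easy to reproduce outside the app. A minimal sketch, assuming the coded results are stored in a JSON file such as `coded_comments.json` keyed by the same `id` values that appear in the raw responses (the file name and loading code are illustrative assumptions, not the app's actual implementation):

```python
import json


def load_coded_comments(path="coded_comments.json"):
    """Load coded comments into a dict keyed by comment ID.

    Assumes each record carries the fields visible in the raw LLM
    responses: id, responsibility, reasoning, policy, emotion.
    The file name and layout are illustrative assumptions.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}


def lookup(coded, comment_id):
    """Return the coding result for one comment ID, or None if absent."""
    return coded.get(comment_id)


if __name__ == "__main__":
    coded = load_coded_comments()
    # The IDs shown in the sample list are truncated ("ytc_UgjN2KgJT…");
    # the full ID is needed for an exact-match lookup.
    print(lookup(coded, "ytc_UgzOkfwr_TrUzcN3REF4AaABAg"))
```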
Random samples — click to inspect:

- "The answer is that the vehicle should reduce as much harm as possible. So if you…" (ytc_UgjN2KgJT…)
- "You're one of the few youtubers with a levelheaded outlook on AI and not trying …" (ytc_UgyWgwaR3…)
- "im in favor of jobs being automated but then its time to evolve into a fair econ…" (ytc_UgwyPYlwj…)
- "Seriously, how weak minded do you have to be to say "ChatGPT scares me" lmao. Ge…" (ytc_UgxS-kOjb…)
- "YES like f AI. It's ruining our planet, destroying our creativity, and ruining o…" (ytr_UgxV9Tbl6…)
- "I've seen my university peers get academically screwed over because they just ta…" (ytc_Ugz1AQY15…)
- "Would you feel safer being a passenger in a Tesla FSD and Waymo or as a passeng…" (ytc_UgyRfBDFG…)
- "You are Avery God lecturer. I like it. When are you yet to set another session? …" (ytc_Ugwhm5xI5…)
Comment
This reminds me of that comedy show "Better off Ted". They installed a new facial recognition system to run all the systems in the building, from opening all doors to the water fountain. What they found was the software couldn't see black people. They basically thought it would be too expensive to replace the software so they figured the black employees could live with the consequences....essentially they had to have a black restroom, black water fountain, and were required to have a white "buddy" follow them to get around the building. Again, the business thought it would just be too expensive to roll it back....just like these lazy police officers. If the software makes mistakes and the police are biased, why would anyone think it's a good idea to allow this?
youtube · AI Harm Incident · 2021-05-03T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
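The four coded dimensions map naturally onto a small typed record. One way to represent a coding result is sketched below; the label sets are taken from the table above and the raw responses below, and the class itself is illustrative, not part of the actual pipeline:

```python
from dataclasses import dataclass
from typing import Literal

# Category values observed on this page; the full code book may
# contain labels that do not appear here.
Responsibility = Literal["company", "developer", "government", "ai_itself", "none"]
Reasoning = Literal["consequentialist", "deontological", "unclear"]
Policy = Literal["none", "ban", "liability"]
Emotion = Literal["outrage", "fear", "indifference"]


@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    coded_at: str  # ISO-8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```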
Raw LLM Response
```json
[
  {"id":"ytc_UgzOkfwr_TrUzcN3REF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxuOTt5eSSKplzbVbl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw35LBpLIDQ3QurZz14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz_EzfxN7NrWP86dHh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzj6AYBd9wgjG_EhVN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyeNQEOK-9yVHug0LV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_-g_HRWNNe8z8i2p4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyGvxFhi-ImR-GMC3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyg8ppJ9-i6vhBa1jR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyKw0VUBkiIbKRBOdV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
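Because the model returns one JSON array per batch, turning a raw response back into per-comment records is a single `json.loads` call plus a validation pass. A minimal sketch, assuming the allowed label sets inferred from this page (the real code book may include additional labels):

```python
import json

# Label sets inferred from the coded output shown above; adjust to the
# project's actual code book.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"outrage", "fear", "indifference"},
}


def parse_batch(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM batch response into {comment_id: coding}.

    Raises ValueError if a record is missing an ID or uses a label
    outside the expected sets, so malformed model output is caught
    before it reaches the coding-result table.
    """
    records = json.loads(raw_response)
    coded = {}
    for rec in records:
        comment_id = rec.get("id")
        if not comment_id:
            raise ValueError(f"record without id: {rec!r}")
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                raise ValueError(f"{comment_id}: unexpected {field}={rec.get(field)!r}")
        coded[comment_id] = {k: v for k, v in rec.items() if k != "id"}
    return coded
```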