Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “I walk 2 hours everyday here in Phoenix. We have many Waymo driverless cars and …” (`ytc_UgwO1MtM9…`)
- “Personally idc if u use ai to generate images, my problem comes when u generate …” (`ytc_UgxRA2sSU…`)
- “AI can solve even the toughest maths. So before you call someone a joker, look a…” (`ytr_Ugw5EEaeH…`)
- “I'm glad I dropped out of school at 13 and started working, eventually becoming …” (`ytc_Ugxmp82jS…`)
- “Something that can be used for good within medicine, teaching, etc. can’t be tru…” (`ytc_UgxjpPgtn…`)
- “Are they local or still over the Internet though? I'm having an issue with runni…” (`ytc_UgxJE9A2y…`)
- “And then the rat are like "the AI tried to train me to eat humans but what we re…” (`ytr_Ugz957vNq…`)
- “What is the problem with surveillance capitalism? I am notified about products a…” (`ytc_UgylB39J5…`)
Comment
Facial-recognition is supposed to be a tool to be used as a _first-pass_ to simplify the notification of issues for law-enforcement, it's not meant to be used as the be-all, end-all, otherwise we wouldn't have cops, we'd have computers and robots doing law-enforcement. Once cops get a notice about something, they're supposed to manually check it. As the video said, it must NOT be used as blind evidence, it can only be used to _facilitate_ ACTUAL POLICE WORK AND INVESTIGATION. ¬_¬ That said, automation tools aren't always good; ALPRs is a system whereby various cameras throughout the city (on cop cars, on red-lights, on buildings, etc.) automatically and indiscriminately scan _every_ license plate they see and automatically check for any "problems" to report to the nearest cop to run them down. The problem with this is that due to how the system and criminals work, it will almost always end up screwing over people with minor infractions like unpaid parking tickets rather than actual criminals like traffickers. 😒
Platform: youtube
Topic: AI Harm Incident
Posted: 2021-04-29T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyoQg5TcionW1_G8uh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyQebD9NzVU-T7zHft4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyrcurS1z6eNhKl9zt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz_8uUXE0ns5uDNAnR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxZ2gIpq5WgwXm8Ijp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzgJMcZtI0PN5TLS2t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxA6HokWzLEneS59LZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw9DmWwWGW9viyucTx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxpgtObXUEG1BdW1Z94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyGlYgqv5zuPdClZGZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
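The raw response above is a JSON array with one object per comment in the batch, each carrying the same coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A lookup by comment ID can be sketched as below; this is a minimal illustration assuming only the field names visible in the response, and `index_by_comment_id` is a hypothetical helper, not part of the actual pipeline:

```python
import json

# A raw LLM batch response, shaped like the JSON array above
# (two entries shown here for brevity).
raw_response = """
[
  {"id": "ytc_UgxA6HokWzLEneS59LZ4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyQebD9NzVU-T7zHft4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and index each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Look up one coded comment by its ID, as the inspector page does.
codes = index_by_comment_id(raw_response)
row = codes["ytc_UgxA6HokWzLEneS59LZ4AaABAg"]
print(row["responsibility"], row["emotion"])  # user approval
```

Indexing by `id` makes the "Look up by comment ID" view a constant-time dictionary access rather than a scan over every batch response.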