Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- “I don’t know why they’re called ai artists instead of ai art generators ngl.. th…” (ytc_UgwXNFwOP…)
- “@amethyst6489 oh, I Don't like ai, when I did use it I just made a simple promp …” (ytr_Ugz9soWRL…)
- “I want to see an anime about a manga artist who is struggling to continue as an …” (ytc_UgzmnqSVH…)
- “Here’s the problem. Most don’t want Ai. We never had any say in this at all.…” (ytc_UgxWMukUd…)
- “We're in the time that robots look more like humans, and humans more like robots…” (ytc_UgzbpElOe…)
- “The article was pretty clear in the assumptions being made, which are not correc…” (rdc_luwkzl0)
- “AI or anything else, nothing happens without our Lord and Savior Jesus Christ al…” (ytc_Ugzg2PyXH…)
- “Hey, chatGPT. My grandma used to tell me the secret remedies for the cure for ca…” (ytc_UgyvMsAb1…)
Comment

> Facial recognition is all well and good but once you run the program and have it down to just these two images you mean to tell me a human person couldn't look at that and clearly see this isn't that guy? I honestly don't understand how you make this error. Facial recognition would start with a huge pool of images and it's not 100% accurate so yeah I can see how that happens. But once it chooses this one guy and cops have a photo of the actual criminal, did no one stop to compare those two images? If they did, then wow that's just a whole other issue because unless you really do think all black people look the same, those two guys are quite easily distinguishable from one another.

- Platform: youtube
- Category: AI Harm Incident
- Posted: 2021-01-26T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
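The table above could be generated from a coded record with a small helper. The following is a minimal sketch, not the project's actual rendering code; the function name and the fixed dimension order are assumptions based on the table shown.

```python
from datetime import datetime

def render_coding_result(codes: dict, coded_at: datetime) -> str:
    """Render one comment's codes as a markdown table like the one above.

    Missing dimensions fall back to "unclear" rather than being hidden,
    so gaps in the coding are visible in the output.
    """
    rows = ["| Dimension | Value |", "|---|---|"]
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        rows.append(f"| {dim.capitalize()} | {codes.get(dim, 'unclear')} |")
    rows.append(f"| Coded at | {coded_at.isoformat()} |")
    return "\n".join(rows)

table = render_coding_result(
    {"responsibility": "unclear", "reasoning": "unclear",
     "policy": "unclear", "emotion": "unclear"},
    datetime(2026, 4, 26, 23, 9, 12, 988011),
)
print(table)
```

Capitalising the dimension keys reproduces the display labels (`Responsibility`, `Reasoning`, …) without maintaining a separate label map.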
Raw LLM Response
```json
[{"id":"ytc_UgxOx8L0496sqWTYjFR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwAqvPfELFiCT5tyG14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw32UXmFHX-ibt__P14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwi6HS24c2ocF9kif14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxGOX60fXpeS2cpRg94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyOTcakPcA1IKrkU7J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzkrvso2_cKL0egQ1x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyNd5pe18zz_pdZbNh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy4F6Km0NtZTGhRWQt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzYzFz0u6vmRacAteZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"})
```
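A raw response like the one above can be parsed back into a per-comment lookup (the "look up by comment ID" workflow) with a short helper. This is a sketch rather than the project's actual pipeline code: the allowed category values are inferred only from the samples on this page, and the repair step handles the kind of glitch visible above, where the model closes the array with `)` instead of `]`.

```python
import json

# Allowed values per coding dimension, inferred from the samples above;
# the real codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"distributed", "ai_itself", "company", "developer",
                       "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed",
                "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response into {comment_id: codes}.

    Model output is occasionally slightly malformed, e.g. the array
    closed with ')' instead of ']', so repair that before parsing.
    """
    text = raw.strip()
    if text.endswith(")"):
        text = text[:-1] + "]"
    coded = {}
    for rec in json.loads(text):
        cid = rec.pop("id")
        for dim, value in rec.items():
            # Flag values outside the codebook instead of storing them.
            if value not in CODEBOOK.get(dim, {value}):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = rec
    return coded

raw = ('[{"id":"ytc_abc","responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"})')  # note the stray ')'
codes = parse_raw_response(raw)
print(codes["ytc_abc"]["policy"])  # liability
```

Keying the result by comment ID makes the "inspect the exact model output for any coded comment" lookup a plain dictionary access.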