Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "There is a factor inscribed on United States currency that limits the danger of …" (`ytc_Ugz8Hrabm…`)
- "@BaconBaron52 The logical conclusion of this movement is to declare anything n…" (`ytr_Ugx3O-9Pu…`)
- "How long does it take a driverless truck to put on a set of chains to climb a hi…" (`ytc_UgwnOX-vo…`)
- "You my friend are incorrect Ai only knows what the Creator or the programmer pro…" (`ytr_Ugz1HGAfB…`)
- "Thank you Bernie, We should have voted you as president when we had the chance. …" (`ytc_Ugwho8KQC…`)
- "When A.I. become self aware, learns to mimic liberal democrats like Gavin Newsom…" (`ytc_Ugy28UiZL…`)
- "Musk should be in prison. So should Sam Altman, after hearing the actual transcr…" (`ytc_UgwBVEDGJ…`)
- "I'm just so confused how she talks about art. Think whatever you want about a…" (`ytc_UgxAba_md…`)
Comment
When facial recognition first came into play, the news media reported that facial recognition technology works on white people and does not work well on people of color, because the technology was not advanced enough to give accurate results. The news media showed how they were so frustrated trying to make facial recognition technology recognize dark skin. So, it begs the question????
Why would anyone try to use facial recognition technology on people of color in the first place. Based on their own past reach of a technology that according to the news media produced negative results when it come to the facial recognition of melanated people of color.
Since, this information was so widely reported, please explain why on earth would such a very well-known (and it's all over the internet) faulty technology be unleashed to arrest an innocent black woman in the first place?
See Clip From Internet Below:
"Facial recognition algorithms are more likely to misidentify people of color than white people123. A federal study found that black people and Asian people were up to 100 times as likely to produce a false positive than white men, and women were more likely to be misidentified than men across the board1. Numerous studies report that facial recognition technology is “flawed and biased, with significantly higher error rates when used against people of colour”2. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy."
Source: youtube | AI Harm Incident | 2023-08-07T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx-ZZNHDZrlMxbBJLR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwZTM8gtbkZuAOa-wt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxEW6ov-0T__RuNK7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyUjK3CD_4vc6P0Wmt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgygmGxcV54hH7kc0td4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxNBDqfzKx5ZnrRhCJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzXVQIolXhUHJF6bzt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx1kAOyq8X60_H14Jl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxOvOdSifcWelcpgm54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwEN8aH2GKhOptlRdh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
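The raw response above is a plain JSON array with one object per coded comment, carrying the same four dimensions as the Coding Result table. A minimal sketch of how such a response could be parsed and sanity-checked in Python; note that the allowed value sets below are only the values observed on this page (the real codebook may permit more), and the sample id `ytc_x` is hypothetical:

```python
import json

# Dimension values observed in the table and raw response above.
# Assumption: the full codebook may allow additional values.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-codebook values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim} = {row.get(dim)!r}")
    return rows

# Hypothetical single-row response, shaped like the output above.
raw = ('[{"id":"ytc_x","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"indifference"}]')
rows = parse_coding_response(raw)
print(rows[0]["responsibility"])  # company
```

A check like this is useful before ingesting model output into the dashboard, since an LLM coder can occasionally emit a value outside the codebook.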