Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- “It’s funny that AI is even considered to be progressive in the first place. That…” (`ytc_UgyLcpyGI…`)
- “My obvious response that in the long run, most AI generated stuff will either ta…” (`ytc_UgzAPYrfT…`)
- “Art is apart of my life and it's an effective way to express my thoughts and fee…” (`ytc_UgxmuGVyo…`)
- “Watching my daughter grow as an artist over the last 6 years has 100% cemented m…” (`ytc_Ugzv5pWlK…`)
- “In five years or so, if signatures and watermarks for AI-generated content becom…” (`ytc_UgwbOFanU…`)
- “All of this AI inclusion into all ranges of humanity will be the end of humanity…” (`ytc_UgzXsMuq-…`)
- “i have reported several students to my professors for using AI. and i'll continu…” (`ytc_UgxsLhZTQ…`)
- “Nah, the problem is engineers aren’t good with AI coding tools yet. When you get…” (`ytc_UgyD68Jos…`)
Comment
It doesn't have anything to do with profiling of marginalized minorities. It's an inherent technical limitation. Computers don't have eyes and brains; they have light sensors, mathematical formulas and algorithms. In short, software can't "see" the way a human can.
Dark skin, dark clothes, dark surroundings/image context... What is "dark"? It's the absence of light. For our eyes and a computer's optical sensor (i.e. a camera sensor), it means little or no _reflected_ light.
We humans are generally well versed in recognizing faces. Our eyes are analog, and through training, experience and context, our brains can make very accurate subjective evaluations to pick out objective details, even when there's less than ideal data to work with. Especially when observing something in the flesh, our eyes can compensate a lot for unfavorable lighting, angles etc.
Analyzing an image is not a computer's strong suit, especially if the image, and what it depicts, is dark. That lack/absence of light is literally less data for the software to work with. And computers/AI don't "do" subjective evaluations. Nor would we want them to.
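The "less light means less data" point can be made concrete with a toy sketch. The pixel values below are invented for illustration, not taken from any real image: underexposure scales the recorded luminance down, and integer quantization then merges values that were distinct under good lighting.

```python
# Hypothetical 8-bit grayscale pixel values for the same image patch
# under good lighting (values invented for illustration).
bright = [40, 44, 90, 95, 140, 146, 190, 197]

# Underexposure: only ~10% of the light reaches the sensor; integer
# quantization then collapses values that used to be distinct.
dark = [int(v * 0.1) for v in bright]

print(max(bright) - min(bright))         # contrast span: 157
print(max(dark) - min(dark))             # contrast span: 15
print(len(set(bright)), len(set(dark)))  # 8 distinct levels vs 4
```

The dark patch spans a fraction of the value range and loses half its distinct levels, which is exactly the "less data to work with" the comment describes.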
Does this mean the computer/software is at fault? Well, yes, but mostly no. Does it mean it's racist? Of course not. The computer (its software) doesn't even know what it's "looking at". It's just analyzing an image, looking for pixel patterns that humans have programmed it to treat as "this bit is a face", then trying to recognize features of that area, then trying to match what it found against a database of other images of faces. The problem here is rarely the mugshots, passport and driver's license photos etc.; those are taken with optimal lighting. The problem is usually the "crime scene" photos, commonly from surveillance cameras, which rarely produce good-quality, high-contrast images. So there's just too little data for the software to work with.
The only real degree to which "the computer is at fault" lies in the threshold values humans programmed it with: the recipe it works with when trying to make out a face, and a match. The rest is human error. It doesn't take a rocket scientist to understand these shortcomings of facial recognition on rather poor-quality input. So people should _always (!!)_ verify _all_ such matches manually, observing both the source images/video and the database image(s). If they can't easily see it's a match, then you don't have an ID match, simple as that.
And yes, there are cameras that can "see" and record a lot more data into a raw image than is represented on screen. Some can even see things our eyes can't. But those are extremely rarely, if ever, used as surveillance cameras. By the time the footage from a surveillance camera gets to the facial-recognition software, it's usually HD at best, low frame rate and heavily compressed, and all "non-visible" raw data is long gone.
At the end of the day, these tools are very useful. But those using them need to know, understand and account for their weaknesses: both those that are a matter of maturity, and those inherent to automated digital image analysis and recognition. Trust that it's working, but never blindly. Trust but verify, always. And FFS make sure you report any and all anomalies. The software developers have very limited opportunity to do field testing the way law enforcement does, and they need your input to make improvements.
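The match-plus-threshold recipe the comment describes can be sketched as follows. This is a toy illustration, not any real system's API: the function names, the tiny 3-dimensional "embeddings" and the 0.8 threshold are all invented, and real systems compare learned feature vectors with hundreds of dimensions. The key point survives, though: the output is a list of candidates for manual verification, never an identification.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def candidate_matches(probe, database, threshold=0.8):
    """Return IDs whose similarity clears the (human-chosen) threshold.
    These are leads for manual review, not positive identifications."""
    return [pid for pid, emb in database.items()
            if cosine_similarity(probe, emb) >= threshold]

# Invented 3-dimensional "embeddings" standing in for real face features.
db = {
    "person_a": [0.9, 0.1, 0.2],
    "person_b": [0.1, 0.9, 0.3],
}
probe = [0.88, 0.15, 0.25]  # features extracted from e.g. CCTV footage

print(candidate_matches(probe, db))  # ['person_a']
```

Note that everything hinges on `threshold`: set it too low and a blurry, underexposed probe will "match" many people. That is the human-programmed recipe the comment points at, and why manual verification of every candidate is non-negotiable.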
Source: youtube · AI Harm Incident · 2021-07-22T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzFu6sylidTYF9luuJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxsPJgpf-lOVTg2IUN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzI82jW91no5EySzUZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxvPjw6EDCMnX35cW54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxVCG0nThHpLDpID-x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy-H3dDd7bP8GcChw94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzvkxAmKl8ATqyxYH54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx9zd9U7rvgkn4ELh54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx00YTWT_VserL7lNd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugw_T135Ynf_TZFxGDx4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
```