Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The globalist scum who think they deserve to run the planet do not like non-whit…" — ytc_UgzNMD0q7…
- "Oh one more thing about driverless, robotic things? Hominidea are the most thiev…" — ytc_Ugw09pjus…
- "What if it was unlawful for AI to lie. Like Spock or Data. It would be a relief…" — ytc_Ugxae1kB1…
- "AI: "When I take over the world, I think I'll spare this one human... he laughed…" — ytc_Ugwd9Okj4…
- "The problem is that the managers and non-tech workers despise software engineers…" — ytc_UgzS3lQ2M…
- "GUIDE v0.9 Visual Memory Blocks for Continuity of Thinking (Human ↔ AI) Core i…" — ytc_UgzP9FiaD…
- "I think artist should make a class action sue against the AI developers like mid…" — ytc_UgxEK510g…
- "Look I don’t care what app is telling you what doesn’t mean you have to do it he…" — ytc_Ugw7efWuY…
Comment
A lot of people commenting here DNRTFA.
Prof. Sharkey is primarily commenting on a failure in AI to *detect a face at all*, in some cases, if the subject's skin is dark. That's not some kind of silly abstract accusation; it's objectively a serious problem with enabling weaponized AI to determine targets based on "facial recognition."
>“In the laboratory you get a 98% recognition rate for white males without beards. It’s not very good with women and it’s even worse with darker-skinned people. In the latter case, the laboratory results have shown it comes to the point where the machine cannot even recognise that you have a face.”
Source: reddit · Category: AI Harm Incident · Timestamp: 1576177544.0 · ♥ 57
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_dzyft7n","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_dzxz0e9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_fal316o","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_fal7kg7","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_fala5ne","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"resignation"}
]
```
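The "look up by comment ID" view above can be backed by a few lines of parsing: the raw response is a JSON array with one record per comment, so indexing the records by their `id` field gives constant-time lookup. A minimal sketch follows; the function and variable names are illustrative assumptions, and only the JSON field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above.

```python
import json

# A shortened stand-in for a raw LLM coding response, using the same
# field names as the response above (records trimmed for brevity).
raw_response = """
[
  {"id":"rdc_dzyft7n","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_fal316o","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["rdc_fal316o"]["policy"])  # prints: liability
```

In practice the parse step would also want a guard for malformed model output (e.g. a `json.JSONDecodeError` handler), since nothing forces an LLM to emit valid JSON.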