Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Personally, I don’t like AI art since I don’t see the actual meaning from AI, fr…" (ytc_Ugx6Lmy2u…)
- "Got flagged by AI security at a train station as a \"Suspected terrorist\" because…" (ytc_Ugzq3Qho_…)
- "I can disable this and 90% of other robots in less than two seconds. This is wh…" (ytc_UgxRk5ZKS…)
- "The idea that there will be no more jobs isn’t even CLOSE to insane. Robotics an…" (ytc_UgwDgRNiq…)
- "I would try to argue with you about why AI is theft and is evil even if you spen…" (ytr_Ugz0MXRCW…)
- "I wanna see this concept taken a step (or few) farther and rather than just \"tai…" (ytr_UgxWbnuxj…)
- "There are many paths to getting ChatGPT to claim consciousness. One of the easie…" (ytc_UgzDHyY5q…)
- "IT WILL NEVER WORK. TOO MANY VARIABLES IN DRIVING. IT WILL NEVER WORK. DON'T…" (ytc_UgzQHLlGz…)
Comment
> The tool is just that:
It's the same reason fingerprints are never 100%. It's a tool, and a fingerprint scan will often return matches with as little as 70% confidence across a pool of hundreds to thousands of possible individuals.
Even most scanners matching against a single person have an accuracy rating of about 97%, and that is with clean, precise, digital contact scanning. Fingerprints pulled from crime scenes are distorted, never perfect, and often partial or smeared.
You can only take the fingerprints and narrow down a list of possible suspects. From there, you investigate further.
Facial recognition systems with AI are the same type of tool. Even Facebook, which arguably has one of the best facial recognition systems, still relies on other data points to guess whose face it is: account associations, associations with others in the same picture, and whether it was posted from a specific user's account for which they've already built a facial likeness and profile history. Even then, with all those complicated systems combined, they are only something like 98-99% accurate when guessing.
These fuckers have 1 blurry picture and somehow guess 1 individual when their tool should have guessed thousands of people, possibly tens of thousands based on a blurry picture. If their AI system guessed 1 dude, they need to pull that shit off the market and fix shit.
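The commenter's point is a base-rate argument: even a matcher with very high per-comparison accuracy produces large candidate pools when searched against a big gallery. A minimal sketch of that arithmetic, with illustrative numbers that are assumptions rather than figures from the comment's source:

```python
def expected_false_matches(gallery_size: int, false_match_rate: float) -> float:
    """Expected number of innocent people flagged in a one-to-many search.

    Each non-matching gallery entry has an independent chance equal to the
    false-match rate of being incorrectly returned as a hit.
    """
    return gallery_size * false_match_rate


# Hypothetical numbers: a 99%-specific matcher (1% false-match rate)
# searched against a one-million-face gallery.
gallery = 1_000_000
fmr = 0.01  # 99% accurate per comparison -> 1% false matches

print(expected_false_matches(gallery, fmr))  # 10000.0
```

Even at "98-99% accurate," the tool should surface thousands of candidates from a large gallery, not a single individual — which is the comment's objection to treating one blurry-photo match as an identification.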
Source: reddit
Category: AI Harm Incident
Timestamp: 1609198754.0 (Unix epoch)
♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ghc3axh","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_ghc1eb4","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"rdc_ghbvo88","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"rdc_ghc6ies","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_ghcay0o","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
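A batch response like the one above has to be parsed and validated before its rows can populate the coding table. The following is a minimal sketch, not the tool's actual code; the allowed value sets are assumptions inferred only from the values visible on this page.

```python
import json

# Assumed codebook, reconstructed from values seen in this page's output.
ALLOWED = {
    "responsibility": {"none", "developer", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "unclear"},
}


def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        has_id = isinstance(row.get("id"), str)
        dims_ok = all(row.get(dim) in vals for dim, vals in ALLOWED.items())
        if has_id and dims_ok:
            valid.append(row)
    return valid


raw_example = (
    '[{"id":"rdc_ghc6ies","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)
print(len(validate_batch(raw_example)))  # 1
```

Rows with missing dimensions or out-of-codebook values are dropped rather than repaired, so a malformed model response degrades to fewer coded comments instead of corrupt table entries.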