Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgyEftL-Z…`: "What a nonsense. There is no danger. What's the plan here? To lure Chat GPT into…"
- `ytc_UgxYjPzXL…`: "Human art is just that, human, and AI just isn't comparable. I look at actual Gh…"
- `ytc_Ugydco_-L…`: "On the difference between a person "copying" yours or anyone's art style and an …"
- `ytc_UgylRzEMP…`: "My child who is IT warned us about this. It’s no joke. IBM has remove 8,000 HR j…"
- `ytr_UgzqjFPXI…`: "It's bad objectively to the artist who knows what rules and principles true art …"
- `ytr_UgwKpJ-yB…`: "@someonesurviving I was trying to find seething AI artists in this comment secti…"
- `rdc_m83s4qc`: "According to the article, they aren’t getting replaced by AI. He’s trying to imi…"
- `ytc_Ugzh05lfJ…`: "I've had chimerism misclassified as chronic bromism half a dozen times because o…"
Comment
> Exactly this. If you want something from ChatGPT, you have to ask for it. It's not gonna read your mind, and implied context is meaningless.

Source: reddit | Category: AI Harm Incident | Posted: 1751215935.0 (Unix timestamp) | ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_n0fb08l", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_n0fm17d", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n0f2t6i", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n0fn4yj", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_n0f7e71", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
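A batched response like the one above can be indexed so that any comment's codes are retrievable by its ID, which is what the lookup view does. The sketch below is a minimal, hypothetical illustration: the JSON shape (a list of objects with an `id` field plus the four coding dimensions) is taken from the sample response, but the `index_codes` helper and the two records in `raw_response` are assumptions, not the tool's actual implementation.

```python
import json

# Two records copied from the sample batch response above; the real
# responses contain one record per comment in the batch.
raw_response = """
[
  {"id": "rdc_n0fb08l", "responsibility": "ai_itself", "reasoning": "unclear",
   "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_n0fm17d", "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse a raw batch response and key each coding record by comment ID."""
    records = json.loads(raw)
    # Drop the "id" field from each record; it becomes the dict key instead.
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

codes = index_codes(raw_response)
print(codes["rdc_n0fm17d"]["emotion"])  # -> indifference
```

With such an index, the "Look up by comment ID" box reduces to a single dictionary access per query.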