Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by its comment ID.
Random samples:
- "Dont know why they are blaming the algorithm when its the people using it thats …" (ytc_UgwCEcxGx…)
- "Currently the chat bot sucks better use ai customer service ai for easy tasks an…" (ytc_Ugz5Q543B…)
- "Whether you are for or against AI generated art the key point here is that using…" (ytc_UgzVhreAS…)
- "AI (in reality, precisely NN) replaces only useless people. You need at least 30…" (ytc_Ugx2w71kj…)
- "Sorry I didn't specify, things like language jobs that have been taken by ai, an…" (ytr_Ugz2rgc_J…)
- "'A podcaster like me!' hahahahahaha, yes 100% a robot can mimic and even better …" (ytc_UgwKr0KlE…)
- "I do not see any dilemma at all and I dont have any difficulty answering the que…" (ytc_UgzEykcQ7…)
- "Training on copyrighted work is iffy legally and could go either way (generally …" (rdc_nhy80q0)
Comment

> An AI company would have detect this if the same set of instructions are suddenly requested for millions of times.

Source: reddit · AI Harm Incident · Timestamp: 1702147382.0 · ♥ 91
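The numeric timestamp shown with the comment looks like a Unix epoch value in seconds (an assumption; the dashboard does not label its unit). A minimal sketch converting it to a readable UTC datetime:

```python
from datetime import datetime, timezone

# Assumed to be seconds since the Unix epoch, as exported by the dashboard.
ts = 1702147382.0

# Convert to an aware UTC datetime for display alongside the coded record.
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # 2023-12-09T18:43:02+00:00
```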
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_kcpjcal","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_kcrsqem","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_kcrga57","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"rdc_kcpbe0g","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_kcny7sn","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
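The raw response is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of the lookup-by-ID step, assuming the field names shown in the response above (the `raw` string here is a shortened copy of that response, used for illustration):

```python
import json

# Shortened copy of the raw LLM response shown above: a JSON array of
# coding objects, each carrying the comment ID plus the four dimensions.
raw = '''[
{"id":"rdc_kcpjcal","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_kcny7sn","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Index the codings by comment ID so a single comment's result can be
# fetched directly, as the dashboard's lookup does.
codes = {row["id"]: row for row in json.loads(raw)}
print(codes["rdc_kcny7sn"]["emotion"])  # fear
```

Keying on `id` assumes the model returns exactly one object per comment; duplicate IDs would silently keep only the last coding.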