Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
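The same lookup can be scripted against an export of the coded records. A minimal sketch, assuming the records are stored as a JSON array in the shape shown under Raw LLM Response below; the file name and helper are hypothetical:

```python
import json
import sys

def lookup(comment_id: str, path: str = "coded_responses.json") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes `path` holds a JSON array of records shaped like the
    Raw LLM Response example below (hypothetical file name).
    """
    with open(path) as f:
        records = json.load(f)
    return next((rec for rec in records if rec["id"] == comment_id), None)

if __name__ == "__main__":
    # e.g. python lookup.py rdc_oa4057u
    print(lookup(sys.argv[1]))
```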
Random samples

- "This has been true of development for a really long time. Kind of... When devel…" (`rdc_nbioy0o`)
- "Are you seriously going to click in this video everyday and spamming your passiv…" (`ytr_UgzG9o-P3…`)
- "Humans are one step closer to destroying themselves if this intelligence cannot …" (`ytc_UgyC8KB3g…`)
- "Total job loss imminent? Is AI going to flip your burger? Pave your driveway? …" (`ytc_Ugx8wyApN…`)
- "I agree but what worries me a bit is the cost, rhey made deepseek with 6 mil (ev…" (`rdc_m94iz7g`)
- "Lol. AI killing people. That's good. Maybe finally humans can come together ag…" (`ytc_UgxPtR93I…`)
- "> ChatGPT usually comes up with answers which sound true but they are actuall…" (`rdc_jkpuou8`)
- "A lot of people seem to be pointing out flaws with deep research but this is to …" (`rdc_mbmvwwv`)
Comment
> I don't know I feel like the possibility that some people would probably try and use ChatGPT to help them cover up crimes or get away with murder is definitely something that would have come up while developing the product.
> Imagine the lawsuits if it was out here telling people the best way to hide a body.

| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Harm Incident |
| Posted | 1773349731.0 (Unix epoch) |
| Likes | ♥ 33 |
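The posted timestamp is raw Unix epoch seconds; converting it to a readable UTC time in Python is direct (the value is the one from this record):

```python
from datetime import datetime, timezone

# Convert the record's Unix epoch timestamp (seconds) to UTC.
posted = datetime.fromtimestamp(1773349731.0, tz=timezone.utc)
print(posted.isoformat())  # 2026-03-12T21:08:51+00:00
```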
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
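For downstream use, one coded record could be modeled as a small dataclass. A minimal sketch, assuming the dimension names from the table above; the example values in the comments come from the raw response below and may not be the full codebook:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    comment_id: str      # e.g. "rdc_oa4057u"
    responsibility: str  # seen: company, developer, user, ai_itself, none
    reasoning: str       # seen: consequentialist, virtue, unclear
    policy: str          # seen: liability, regulate, unclear
    emotion: str         # seen: fear, outrage, indifference, mixed
    coded_at: datetime   # e.g. 2026-04-25T08:33:43.502452
```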
Raw LLM Response
```json
[
{"id":"rdc_o50nb5q","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"rdc_o51l7fw","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_oa4057u","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"rdc_oabz523","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_oa0gx99","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
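Because the model returns a bare JSON array, a parser mainly has to load it and confirm each record carries the expected keys before indexing. A minimal sketch under that assumption; the function name is illustrative:

```python
import json

# Keys each coded record is expected to carry, per the response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response and index its records by comment ID.

    Raises ValueError on malformed output so bad batches fail loudly
    instead of slipping into the coded dataset.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError(f"expected a JSON array, got {type(records).__name__}")
    indexed = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} is missing {sorted(missing)}")
        indexed[rec["id"]] = rec
    return indexed
```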