Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
as someone studying computer science, chatbots are internet echo chambers so wit…
ytr_UgxsA91uU…
To be honest that reassuring but whatever happen today can't prove that is won't…
rdc_n80seq3
I don’t understand why we have to have AI to the level of super intelligence acr…
ytc_Ugxut6gwM…
To build on what you're saying, AI doesn't know what it's doing and won't apply …
rdc_jif948h
The biggest variable is the common person. Everyone is using AI. School work med…
ytc_UgyIbZ5Ji…
Forgive the obvious question, and perhaps this appears very naive. But if things…
ytc_UgygB_dQU…
I AM SO DISAPPOINTED IN YOU! You are someone I imagined would be uninterested in…
ytc_UgyQTM7gR…
You didn't list any ai proof jobs that you found. You told us 5 mins in to think…
ytc_UgwQIDHbZ…
Comment
How is this the beginning?
I witnessed this happening three decades ago, and I am sure that if you go back another generation or two they can also give you similar examples (or even worse ones).
This isn't some new thing being made possible by AI.
reddit
AI Harm Incident
1722933361 (Unix timestamp)
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_lgov9u6","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_lgohf52","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"rdc_lgqq6b0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"rdc_lgnrl4n","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_lgowayz","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```