Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "So ChatGPT can be an asshole but it won’t tell me a dark joke or a joke that pok…" (ytc_UgwI4xFtr…)
- "Your job could be Next ? No.. *Your Job, Is Next.* 50 % Of Working Class Am…" (ytc_UgyBfPgAB…)
- "all this automation comes at a higher cost that i dont think people are discussi…" (ytc_UgzRoiBQi…)
- "I wanted you to ask George, if humans need purpose, and the AI begins to emulate…" (ytc_UgwogWkSd…)
- "I have never emulated someone's specific art style with AI and I never intend to…" (ytc_UgwsS64fG…)
- "My main concern is how its gonna force me to use it. Its cool for getting inspir…" (ytc_UgwZKyuCl…)
- "Haha, that's a funny comparison! It’s interesting how different AI interactions …" (ytr_UgwGpE4Z3…)
- "the scariest part about the deep fakes and stuff like it is they aren't just tar…" (ytc_Ugw8X19oM…)
Comment
> The conversation doesn’t sound anything like any ChatGPT convo I’ve had or seen. The advice and the tone. I wonder if he gave it special instructions.

Source: reddit · Topic: AI Harm Incident · Posted: 1756219537.0 (Unix timestamp) · ♥ 3059
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nat0jry","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_nau47bj","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_navroy1","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_narp7i0","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_naru6dq","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
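The "look up by comment ID" view above amounts to indexing this JSON array by its `id` field. A minimal sketch in Python, using the batch response shown verbatim (the variable names `raw` and `coded` are illustrative, not part of the tool):

```python
import json

# Batch response from the coding model, copied verbatim from above.
raw = """[
{"id":"rdc_nat0jry","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_nau47bj","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"rdc_navroy1","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_narp7i0","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_naru6dq","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]"""

# Index the coded rows by comment ID for constant-time lookup.
coded = {row["id"]: row for row in json.loads(raw)}

# Retrieve one comment's coding by its ID.
print(coded["rdc_navroy1"]["responsibility"])  # user
print(coded["rdc_naru6dq"]["emotion"])         # mixed
```

Note that `rdc_naru6dq` (user / unclear / unclear / mixed) matches the "Coding Result" table above, which is how the detail view ties a comment back to its row in the raw response.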