Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `rdc_cjofh7e`: "Kind of a reductive comment you have made. Those doctors didn't go to Liberia to…"
- `ytc_UgyRGxe9O…`: "al voice is old school , this happened like 10 years ago now they have ai video …"
- `ytc_UgwuYirf_…`: "So if I understand correctly, the heads of state and industry are willing to sac…"
- `ytc_UgxK1egvt…`: ">post saying that people who say ai art has no soul are wrong because of this vi…"
- `rdc_ohyquph`: "Here is the full source code: [https://github.com/cstefanache/llmct](https://git…"
- `ytc_UgybYCLiU…`: "When ai starts creating and inventing without the need for human intervention, h…"
- `rdc_n9h9ui0`: "Yup, exactly, same experience here. Any LLM solution I’ve seen - whether designi…"
- `ytc_Ugy7afF2l…`: "Deep fakes are not a real legal problem. The law is not and should not protect y…"
Comment

> Go to AI Dungeon then, that's what it's often used for, with a spicy "SAFE MODE" button that can be toggled off.

- Platform: reddit
- Topic: AI Harm Incident
- Timestamp: 1681469498.0 (Unix epoch)
- ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_jg8n7zh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_jg7icy8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_jg7o13d","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_jg7c9dz","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_jg7cl29","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
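A raw response like the one above can be turned into per-comment codes with a small parser. The sketch below is a minimal, hypothetical implementation (the function name and repair heuristic are assumptions, not part of the tool shown): it assumes the per-item schema used above (`id` plus the four coding dimensions), repairs the common case where the model closes the JSON array with a stray character instead of `]`, and falls back to `"unclear"` for any missing dimension, matching the fallback values in the coding-result table.

```python
import json

# The four coding dimensions used in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Models occasionally emit malformed JSON (e.g. a stray ')' where the
    closing ']' belongs), so try a simple repair before giving up:
    truncate after the last '}' and re-close the array.
    """
    try:
        items = json.loads(raw)
    except json.JSONDecodeError:
        repaired = raw[: raw.rfind("}") + 1] + "]"
        items = json.loads(repaired)

    coded: dict[str, dict[str, str]] = {}
    for item in items:
        cid = item.get("id")
        if cid is None:
            continue  # skip entries the model emitted without an ID
        # Missing or omitted dimensions default to "unclear".
        coded[cid] = {d: item.get(d, "unclear") for d in DIMENSIONS}
    return coded
```

For example, a response ending in `})` instead of `}]` still parses, and an item that omits `policy` is coded as `"unclear"` on that dimension.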