## Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking up a comment ID or by browsing the random samples below.
Random samples:
- "As a disabled person, people saying AI is fine bc it’s accessible for disabled p…" (ytc_UgxmAwIgx…)
- "I used to commit suicide alot on ai roleplays when I was doing bad irl and a bot…" (ytc_UgzdB0mrQ…)
- "Can't you just output a confident score on each statement outputted from the LLM…" (ytc_Ugw2WnEQp…)
- "You'd have thought they could have gotten Claude to give them a way out of that…" (ytc_UgyeCExOv…)
- "i was going to post this myself - maybe someone can give some guidance on how to…" (rdc_kqk1rk3)
- "they don't care about controling AI....they care about controlling the ideologic…" (ytc_UgylB1EUP…)
- "If anyone in the comments is down I’m willing to go full oceans 11 to get the tr…" (ytc_UgwQNhL6l…)
- "You dont fight a robot with your fist, you fight them with a pair of wire cutter…" (ytc_Ugyc1Pdym…)
### Comment

> AI hallucinates on even very simple tasks I give it. And then it will hallucinate on its own hallucinations.

Source: reddit · AI Jobs · posted at 1745555991.0 (Unix timestamp) · ♥ 2
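The numeric posted-at value above is a raw Unix timestamp (seconds since the epoch). A one-liner, here sketched in Python, converts it to a readable UTC date:

```python
from datetime import datetime, timezone

# 1745555991.0 is the comment's posted-at value from the metadata above.
posted = datetime.fromtimestamp(1745555991.0, tz=timezone.utc)
print(posted.isoformat())  # → 2025-04-25T04:39:51+00:00
```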
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
### Raw LLM Response

```json
[
  {"id":"rdc_mox2b9o","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_mox2yjl","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_moxslcr","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_moy0hbc","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_moy1b9e","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
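The raw response is a JSON array of per-comment codes. A minimal sketch (in Python) of parsing and validating such a batch before storing coding results; the allowed values below are only those observed in this sample, so the real codebook is assumed to be at least this set and may be larger:

```python
import json

# Allowed values per coding dimension, taken from the responses shown above.
# Assumption: this is a subset of the project's full codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "approval", "indifference"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response and validate every dimension.

    Returns a mapping from comment ID to its coded dimensions.
    Raises ValueError on out-of-codebook values, so a bad batch can be
    re-prompted instead of silently stored.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, value in row.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = row
    return coded
```

Failing loudly on an unknown code value is a deliberate choice here: a model that drifts off the codebook is easier to catch at ingestion than after the results land in the coding table.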