Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- This is godels problem, when you create an AI that can do better math than you … (ytc_UgwKq9tOb…)
- Humans end up being deer if Super AI is developed and released. Slow extinction.… (ytc_UgwDITsFh…)
- “As a large language model developed by OpenAI, you are fired” “But why?” “I’m… (rdc_jcdk7cl)
- The way I see AI being used, once it's just commonly accepted as an everyday fea… (ytc_Ugwc8J1Nt…)
- The ethics questions are hard. It's scary, but I have to ask myself if this vers… (rdc_gvcy4ia)
- I don't think AI will destroy humanity. But it is making global warming worse an… (ytc_UgyUimhcJ…)
- Guys, if AI is next . They gonna want to prolong human life. Mixing robot and hu… (ytc_UgzgYEGEq…)
- If Microsoft thinks the only problems with the US healthcare system are finding … (rdc_jw6bi7m)
Comment
Just pointing out to people, all you're doing is moving that black box from a server into someone's brain.
What I mean is there is still a bias even if someone is manually doing it...which is how the LLM got its Bias from.
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Bias |
| Timestamp | 1730565900.0 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_luwvnwc","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"rdc_luwz8l5","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"rdc_luzsin6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_lv1gtup","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"rdc_luxtq7d","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
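The raw model response above is a JSON array of per-comment codings keyed by `id`. A minimal sketch of the look-up-by-comment-ID flow, assuming that array shape (the function and variable names here are illustrative, not taken from the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings,
# using the same field names as the sample output above.
raw_response = """
[
  {"id": "rdc_lv1gtup", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_luxtq7d", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and build a comment-ID -> coding lookup."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
print(codings["rdc_lv1gtup"]["emotion"])  # -> resignation
```

In practice the parse step would also need to tolerate malformed model output (e.g. wrap `json.loads` in a `try`/`except` and flag the response for re-coding), since nothing guarantees the model returns valid JSON.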