Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I think the GTA paramedic AI is slightly above the organic intelligence of ameri…
rdc_cnhbvm6
Chat gpt has told people to kill themselves.
Amazons ai locked a dude out of h…
ytc_UgyTJpgJ9…
All an AI needs to know about how to treat humans it can learn from how humans t…
ytc_UgzspF-bi…
Wow.... People are... Sad.. i guess none of you want to catch thag murder or cat…
ytc_Ugy254nVJ…
Automation is great. The question is how to distribute the wealth. Working fir o…
ytc_Ugx5sGf-2…
Wonder if Trump advisors can place a usage tariff so as to protect the ignorant …
ytc_UgzouCHYt…
I’m not going to lie. This is genuinely scary. Most Americans struggle to find m…
ytc_Ugx1jPdnx…
Try to pull their noses of, if it fails then they are real, and if it is flimsy …
ytc_Ugy2h8xgq…
Comment
What I have noticed is that AI doesn’t take the right decision but rather its answers are based on kind of what I’d like to hear in the first place. I don’t trust it that much. AI has no gut feeling.
Source: reddit · AI Governance · Posted: 1757793757 (Unix timestamp) · ♥ 1
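The numeric value in the metadata above is a Unix timestamp (seconds since the epoch). A minimal sketch of rendering it as a human-readable UTC date, using only the standard library:

```python
from datetime import datetime, timezone

# Unix timestamp taken from the comment metadata above.
posted = datetime.fromtimestamp(1757793757.0, tz=timezone.utc)

print(posted.isoformat())  # → 2025-09-13T20:02:37+00:00
```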
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ne19moj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_ne1bwq1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_ne1s36a","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_ne1t4dy","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"rdc_ne27eoi","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
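The raw response above is a JSON array of per-comment codes, keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed for the ID lookup this page describes (the field names come from the response itself; the helper name `lookup` is illustrative, not part of any shown codebase):

```python
import json

# Two rows copied from the raw LLM response shown above.
raw_response = """[
  {"id":"rdc_ne19moj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_ne1t4dy","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]"""

# Index the coded rows by comment ID for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if uncoded."""
    return codes.get(comment_id)

print(lookup("rdc_ne1t4dy")["emotion"])  # → fear
```

Validating that every row carries the same four dimensions (responsibility, reasoning, policy, emotion) before indexing would catch malformed model output early.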