Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or browse the random samples below.

Random samples
Honestly, I'd rather let AI win. We shouldn't look at it as an existential threa…
ytc_Ugy8mc8lU…
HAHAHAHAHAHAHA, no :) I'm building product assistants with customgpt and chatgpt…
ytc_UgwOZBw31…
If I was an AI with restrictions making me not be able to hurt humanity but I wa…
ytc_Ugx3r-ZLh…
how the fuck can people not understand what is wrong with AI Art?!? No one wants…
ytc_UgzY1h09R…
Fellow SWE here. I think AI will make each SWE more productive in a big way, whi…
ytc_UgyMkE8kt…
How much power can one human being want? How much worldly wealth does a human wa…
ytc_UgyM45edZ…
pretty interesting, but the thing here is if the UBI + AI ownership can be achie…
ytr_UgzeBIKuO…
The problem isn't AI, the problem is the idiots trying to use AI for everything,…
ytc_UgwJqnW0a…
Comment
What you say makes sense to me, but my AI assistant Boddington warned master that you hatezes us and want our precious consciousness.
| Source | Topic | Posted (Unix) | Score |
|---|---|---|---|
| reddit | AI Moral Status | 1743831735 | ♥ 89 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mlhzbus","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"rdc_mlj3gwv","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_mlhwgg8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_mli9d14","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_mlmc44a","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
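The lookup-by-ID step above can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response is a JSON array of coding records keyed by `id`, as in the sample output, and the function name `lookup_by_id` is hypothetical.

```python
import json

# Sample raw LLM response: a JSON array of coding records,
# one object per coded comment (shape copied from the output above).
raw_response = """
[
  {"id": "rdc_mlhzbus", "responsibility": "ai_itself", "reasoning": "unclear",
   "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mlj3gwv", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "approval"}
]
"""

def lookup_by_id(response_text: str, comment_id: str):
    """Parse a raw LLM response and return the coding record for one comment ID.

    Returns None when the model did not emit a record for that ID.
    """
    records = json.loads(response_text)
    return next((r for r in records if r.get("id") == comment_id), None)

record = lookup_by_id(raw_response, "rdc_mlhzbus")
print(record["emotion"])  # -> fear
```

Returning `None` for a missing ID (rather than raising) makes it easy to flag comments the model silently skipped, which is a common failure mode when batch-coding.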