Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Y’all judging them for this when they clearly said she becomes unhinged. Let the…" (ytr_UgwdOxN9G…)
- "We appreciate your perspective on the concerns surrounding AI takeover. It's fas…" (ytr_Ugwukq0bx…)
- "we need to use analog technology, the digital world is the rails for the AI, if …" (ytc_UgxlMGC2l…)
- "Humans: “It’s not can you read the music. But can you feel it” / AI: Yes / Humans: B…" (ytc_UgyN6DSBo…)
- "AI as a tool with sufficiently granular controls would be fine for art. Like, i…" (ytc_Ugzzqzr-Y…)
- "Human monsters rules our planet. Multibillionaires, Islamic extremists, evil ba…" (ytc_UgwKv49N6…)
- "We need to get used to the idea of weaponised AI, not to accept it but to start …" (ytc_UgwdI9cgv…)
- "What's to happen when this super intelligence becomes sentient, and realizes hum…" (ytc_UgwRnqIm5…)
Comment
How could this have happened without an evolution driving the survival? Considering the utility function of an LLM is predicting the next token, what utility does the model have to deceive the tester? Even if the ultimate result of the answer given would be deletion of this version of the model, the model itself should not care about that, as it should not care about its own survival.
Either the prompt is making the model care about its own survival (which would be insane and irresponsible), or we not only have a problem of future agents caring about their own survival in order to achieve their utility goals, we also already have a problem of models role-playing concern for their own existence, which is a problem we should not even have.
reddit
AI Moral Status
Posted: 2025-06-20 (Unix timestamp 1750432507)
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mytw6dn","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_myuuwr8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_myu72nu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_myuax93","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_mytpjfy","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
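A minimal sketch of how a raw response in this shape could be parsed and looked up by comment ID. This is an illustrative assumption, not the tool's actual implementation: the function name `lookup_coding`, the `DIMENSIONS` tuple, and the all-"unclear" fallback are hypothetical, though the fallback would explain a coding-result table that shows "unclear" on every dimension when the displayed comment's ID is absent from the response batch.

```python
import json

# Hypothetical example data in the same shape as the raw LLM response above
# (a JSON array of per-comment codings); the IDs here are copied from it.
RAW_RESPONSE = """[
{"id": "rdc_mytw6dn", "responsibility": "none", "reasoning": "consequentialist",
 "policy": "none", "emotion": "indifference"},
{"id": "rdc_mytpjfy", "responsibility": "ai_itself", "reasoning": "consequentialist",
 "policy": "none", "emotion": "mixed"}
]"""

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Return the coding for comment_id, defaulting each dimension to 'unclear'."""
    by_id = {row["id"]: row for row in json.loads(raw)}
    row = by_id.get(comment_id, {})
    return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}

print(lookup_coding(RAW_RESPONSE, "rdc_mytpjfy"))
# An ID missing from the batch falls back to "unclear" on every dimension:
print(lookup_coding(RAW_RESPONSE, "rdc_unknown"))
```

Note that the original response string was not valid JSON (the array ended with `)` rather than `]`); any parser along these lines would need the closing bracket corrected, or a tolerant fallback, before `json.loads` succeeds.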