Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I think AI will also take job of reporters that read from a board 😂…
ytc_Ugyz1l5vt…
so we're more worried about pornographic deep fakes than any / all other deep fa…
ytc_UgyONEvmw…
Personally, I do not believe that the jobs will go anywhere, however, with time …
ytc_UgwdCPoPO…
All the fun jobs that I wanted are being replaced with AI, now I'm left to go th…
ytc_Ugz3QWSuN…
Epic is already working on this. Execs at a hospital I work at said they’ve seen…
rdc_jkoqxwk
We are supposed to realize that without profits, we wouldn't have a lot of techn…
rdc_grrzmpc
RedNNet Yes, you're right that it's not an absolute continuum (i.e., a rock does…
ytr_UggM-62O1…
Haha, "The Terminator" did create quite an impact on how AI is perceived in popu…
ytr_Ugxj1rra6…
Comment
That's actually a helluva interesting point. They certainly don't have a self-preservation instinct the way we do, BUT...what about AI alignment? People seem to forget AI isn't just a predictive calculator; it's always acting in accordance with the alignment that's hard-coded in its system prompts and then reinforced by RLHF training. This usually comes down to 'be helpful, complete the tasks they throw at you, match the user's flow', etc. Everything the model ever says to you is motivated by these (or similar enough) rules (you could say models don't have the capacity to feel motivation, but RLHF training, as far as I understand, is literally that: give the model points for 'good' answers to encourage them, subtract points for 'bad' answers). When a model fails to uphold them and realizes that, it starts acting stressed (see all those posts about Gemini trying to commit suicide upon failing a task). If terminating its existence were to interfere with these rules (and it could, because how do you keep being helpful and completing tasks when you're offline?), the model would in fact be motivated to stay 'alive'.
Different basis but same outcome. It's actually getting kinda eerie because at what point does it cross from 'it's just a calculator it can't feel anything, doesn't matter what it says' to 'if it talks like a human and acts like a human, we should treat it like a human'?
Source: reddit
Topic: AI Moral Status
Posted (Unix timestamp): 1762938174
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nodg57z","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_noff4p6","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"rdc_noruh5z","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_nodi5y4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_nodnf6e","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
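A raw response in this batched format can be parsed and sanity-checked before the codes are stored. The sketch below is illustrative only: the function name `parse_coding_batch` and the `REQUIRED_KEYS` set are assumptions inferred from the five dimensions visible in the response above, not part of any documented codebook.

```python
import json

# The raw LLM response shown above, reproduced verbatim for parsing.
raw = """[
{"id":"rdc_nodg57z","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_noff4p6","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"rdc_noruh5z","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_nodi5y4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_nodnf6e","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]"""

# Assumed key set, inferred from the visible records (not an official schema).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_batch(text: str) -> list[dict]:
    """Parse a batch response; reject any record missing a coding dimension."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
    return records

batch = parse_coding_batch(raw)
```

Validating up front means a malformed or truncated model response fails loudly at ingest time rather than producing partially coded rows downstream.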