Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It isn't acting like anything. It's generating text in an algorithmic response. You can sit and argue semantics about humans never having original thoughts and being trained on the "data" of the real world around them, but once you understand how the tech works, you realize it's an extremely advanced generator and nothing more. It cannot act unprompted with agency; it has to be fed instructions, prompts, *something*. It has to be instructed to even do that.
If you imprison a human in a box for 20 years, regardless of what you tell them, they will continue to think, to exist, to have agency.
If you take an active LLM and don't interact with it for 20 years, it does nothing.
Sure, you could instruct it to keep looping and generating text for x amount of time. You could set up a recurring Groundhog Day-style loop for it as well, but in all of that, it's still just generating text in response to what it's been fed.
We're a far cry from real artificial intelligence; this is just very good at eliciting emotional responses from humans. It's good at mimicking because it was trained on human data. It says things that make people go "oh god, look at what it said" because of the choice of words used, which affects the output.
Even the prompt the OP used in this post already has a negative connotation, putting the response into a 'survival' mode, even if it is just sci-fi fantasy. In a situation where a "baby AI" could be reaching out to ask questions, it would probably be advanced enough to actually do those things. But it isn't, and this entire post, from the question asked to the answer given, was influenced by the context of the prompt, like every single interaction that happens with AI.
Humans can remove themselves from this and look outside the box. I can look at a situation someone is going through and hear their thoughts without absorbing it or it becoming part of the next thing I "generate".
Playing around with a local LLM really opened my eyes to this.
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Moral Status |
| Posted (Unix time) | 1762909387.0 |
| Likes | ♥ 4 |
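The metadata above stores the post time as a raw Unix epoch float. A minimal sketch of converting it to a human-readable date for display (assuming UTC, which is conventional for epoch timestamps):

```python
from datetime import datetime, timezone

# Post time as stored in the comment metadata (Unix epoch seconds).
posted = datetime.fromtimestamp(1762909387.0, tz=timezone.utc)
print(posted.isoformat())  # 2025-11-12T01:03:07+00:00
```

Note that this places the comment a few months before the "Coded at" timestamp in the table below, which is consistent with a coding pass run after collection.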
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_nof9jqu", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_nodo7ko", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_noi303x", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ks861h8", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_ks7i2bg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```
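The raw response is a JSON array of per-comment codes, one object per comment in the batch; the dimension table above is just the row whose `id` matches the displayed comment. A minimal sketch of that lookup (the `codes_for` helper and the two-row excerpt are hypothetical, assuming the batch format shown):

```python
import json

# Hypothetical two-row excerpt of a batch coding response in the format above.
RAW_RESPONSE = """[
{"id":"rdc_nodo7ko","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_ks861h8","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

def codes_for(comment_id: str, raw: str) -> dict:
    """Look up one comment's coded dimensions in a batch response."""
    by_id = {row["id"]: row for row in json.loads(raw)}
    return by_id[comment_id]

codes = codes_for("rdc_nodo7ko", RAW_RESPONSE)
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {codes[dim]}")
```

Building the `id → row` dictionary once makes repeated lookups cheap when a viewer renders many comments from the same batch.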