Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “It’s so true when you said it can’t be generated it has to be created. I prefer …” (ytc_Ugz_odvqj…)
- “Sponsor block DOES block in-video ad reads. It’s crowd sourced. Viewers submit w…” (ytr_UgzSJ9qj5…)
- “It’s not that you have something to hide. It’s that AI can know literally everyt…” (rdc_kvy775s)
- “2 questions: 1) How long before there’s a community fighting for rights to mar…” (ytc_UgzrMcOxv…)
- “I fear certain humans creating the groundwork for AI. Humans who want to have fu…” (ytc_UgyyA4QXn…)
- “AI will take over, but we won't get nothing out of it. We'll be slaves to the AI…” (ytc_UgwJP8Azw…)
- “Why do you think they called it "cloud technology"? It sounds harmless but it's …” (ytc_UgwYBfLik…)
- “Bachelors degrees were useless before AI once again fake news trying to push a n…” (ytc_Ugwc3iMTi…)
Comment
If you want a glimpse under the hood of how an LLM actually works, ask it for a seahorse emoji (which doesn't exist) while requiring the response to start with “Yes.” You’ll see it struggle to reconcile incompatible constraints, often producing evasive, inconsistent, or fabricated outputs. If these outputs are anthropomorphized, they might seem like the AI is going crazy, lying, or is otherwise performing some form of malpractice. But it has no intent; it is instead just statistically optimizing for the next token under conflicting requirements. No feelings or anything like that; it's all just simulated, perceived, and humanized. It has no intrinsic morality or goal other than optimizing outcomes, with the highest weight assigned to it during supervised fine-tuning and RLHF training.
Giving unrestricted agency to something that has no moral baseline, survival instinct, or any other goal other than responding to a prompt is a really bad idea. In that sense, the “Shoggoth” metaphor is real, but not as an alien intelligence with hidden intentions. It is simply a distorted mirror of humanity itself, reflecting both the contents of its training data and the preferences of the people who assign rewards and weights. So don’t be afraid of the LLM; instead, be afraid of the data it is trained on (and its human origins) and the humans deciding what counts as a favorable outcome. TL;DR It's all conditioning, baby.
youtube
AI Moral Status
2026-02-07T23:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugxq9JPn0ZViaTmpNSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzgiTUk2BqwUfXfJSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgxTsYKmB_EPYQ5smZB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyTlE8rPoQmR7BMrhF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxVNHvuz5V-bPifdTV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugz-K8lNlHexBYAPdzN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugzh2VQUD0W1MsLdOAh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgwogS2MtBOHtXt_cJR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgzHf0taDQl1U0BZQpR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwVOF-tgHsT9GK5YDd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}]
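The raw response is a JSON array with one object per coded comment, keyed by comment ID. As a minimal sketch of how the Coding Result table can be recovered from it (the `lookup` helper is illustrative, not part of the actual pipeline; the field names come from the JSON above):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
 {"id":"ytc_Ugxq9JPn0ZViaTmpNSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzgiTUk2BqwUfXfJSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]'''

def lookup(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID, or {} if absent."""
    return next((row for row in json.loads(raw_json) if row["id"] == comment_id), {})

row = lookup(raw, "ytc_Ugxq9JPn0ZViaTmpNSp4AaABAg")
print(row["emotion"])  # → indifference
```

Because the model returns a batch of codings in one array, a per-comment view like the table above is just this kind of ID lookup over the parsed response.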