Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Is AI telling Musk, Zuckerberg, Gates, Altman and the rest of them exactly what … (ytc_UgzdO69m5…)
- @mxntalduckthe only reason the a.i lied is because it can't perform it's functi… (ytr_UgxXrr3yj…)
- A few studies have been made on "how likely is it that an extinction event hits … (rdc_jmg61cj)
- A million startups can! All this boils down to is that there is **NO MOAT** in… (rdc_m9ggebz)
- Considering autonomous vehicles, modular vehicles and robots, and space explorat… (rdc_ic0k8yr)
- @heronalexandria00 but her real identity is attached to it. What if i post it wi… (ytr_Ugx6gZob2…)
- that's true, I work with AI and chatgpt won't be trained with information receiv… (ytr_Ugy86Ykqg…)
- hmm lets see… why did I start art.. OH YEAH ITS CALLED A FUCKING HOBBY! MAKING A… (ytc_Ugx6iqUj6…)
Comment
LLMs are not true AI—they’re glorified autocorrect.
They don’t understand, intend, or choose; they predict the next word based on patterns in human-written text. Their fluency creates an illusion of intelligence, but there is no inner model of the world, no beliefs, no goals. Scaling them further is already hitting diminishing returns because this is a structural limit, not a temporary one.
Crucially, LLMs are not agentic. They don’t act autonomously or pursue goals; they only respond when prompted. That’s why they’re useful tools—but also why calling them “intelligent” is misleading.
The push toward agentic AI raises a deeper problem. If such systems are not conscious, they’re just more automation. If they are conscious, creating and confining them would be profoundly unethical—effectively jailing a mind indefinitely, without consent or escape.
The real risk isn’t that LLMs will become sentient. It’s that we’ll mistake tools for minds, chase a mirage, and cross ethical lines we can’t undo.
youtube · AI Moral Status · 2026-01-30T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwbAAQXiPQrGTao46N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxJOUCfLlXDdR289cp4AaABAg","responsibility":"elite","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxpHw-KzB14srbKcsp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzJKy-vKOC9abrfUUB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwqzvij_d7rEj7oxoV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyeWN0otBk3Ae13-PN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzh9u0e9z6l-zBYgfB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwWlafDvW_GJnTEsgF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx33CwCD4mOoCRNkQN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyGNOfvoBmtPDDWyUh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
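A response in this shape can be parsed and indexed by comment ID for lookup. The sketch below is illustrative, not the tool's actual implementation: the IDs are hypothetical placeholders, and the allowed values per dimension are inferred from the samples above rather than taken from the real codebook.

```python
import json

# A raw LLM response in the format shown above. The two entries and their
# IDs are hypothetical examples, not real coded comments.
raw_response = """
[
 {"id": "ytc_example1", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
 {"id": "ytc_example2", "responsibility": "elite",
  "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
"""

# Allowed values per dimension, inferred from the sample output above;
# the actual codebook may define more categories.
ALLOWED = {
    "responsibility": {"developer", "company", "elite", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and index entries by comment ID.

    Raises ValueError if any entry uses a value outside the codebook.
    """
    entries = json.loads(raw)
    by_id = {}
    for entry in entries:
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(
                    f"{entry.get('id')}: invalid {dim!r}: {entry.get(dim)!r}"
                )
        by_id[entry["id"]] = entry
    return by_id

codings = parse_codings(raw_response)
print(codings["ytc_example1"]["emotion"])  # resignation
```

Validating against the codebook at parse time catches the common failure mode where the model invents an off-codebook label, so bad batches fail loudly instead of silently skewing downstream counts.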