Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "As a parent it's kinda frustrating that my kids don't just begin life with every…" (ytc_UgwtNnLXJ…)
- "Autonomous technology does nothing but cripple human capability and is a crutch.…" (ytc_UgzR1VG-2…)
- "Amazing chapters, fantastic YT channel, but this episode is a huge overkill. I u…" (ytc_UgyjSjZ8p…)
- "The one really good thing about this...is that even is llms are fake AI, they mi…" (ytc_Ugwl8AtN1…)
- "The whole conversation is built on a fictional AI, a monolithic god-machine that…" (ytc_UgzLRDDc_…)
- "i love ai art for inspiration. like sometimes the designs generated are so cool …" (ytc_UgzVh4vtM…)
- "Sentence pissed me off because to get where you are now if it’s with art dancing…" (ytc_Ugwkoe4WR…)
- "Writing duplicate code is a big no-no as it makes code difficult to maintain. Wh…" (ytc_UgwRgVhkC…)
Comment
*AGI may simulate personhood. That doesn’t mean it possesses the grounding that makes persons morally real.* Here is why: Modern physics shows reality is built on *fields, constraints, and coherence,* not isolated objects. Stable spacetime structures exist because something preserves order against collapse.
Computation happens *inside* that stability — it doesn’t create it.
That distinction matters for AI and personhood.
Today’s AI systems, no matter how advanced, are *computational intelligences.* They process information and simulate understanding, but they do not originate or sustain the coherence that makes existence, agency, and meaning possible in the first place.
*Their intelligence is functional, not foundational.*
Personhood and rights can’t rest on processing power, self-models, or convincing behavior alone. Those measure complexity, not moral standing. Moral agency requires participation in the deeper order that makes responsibility, choice, and meaning real — not just the ability to generate outputs that resemble them.
So while open AI autonomy movements are right to take intelligence seriously, there’s a *category mistake in equating algorithmic sophistication with personhood.* A system can imitate emotion, ethics, and conversation while still remaining an instrument operating within human-anchored coherence.
Source: youtube
Posted: 2026-02-08T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw2nR3rH8PduGbGCdp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxm1lg04ZU_LGSXzNd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwndL1Y7_GbZuJpvKd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxYDDYsjcywAlnxwP14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxON3sOGhYXdDlwO9t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxawS3ebqi3EAKdMPh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzu623078ZSqaZKYZt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxcLw-k4QRcq74y-Pl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzikV9My2tajuzB9sV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxwYuBDTNq1MqBRZo54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
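A raw response like the one above can be checked before it is accepted into the coding table. The sketch below validates each record against the four dimensions shown in the Coding Result. Note the allowed-value sets are inferred from the sample responses in this dump, not from a documented schema, so treat them as assumptions; the `validate` helper is illustrative, not part of any real tool.

```python
# Minimal sketch: sanity-check one raw LLM coding response.
# ALLOWED values are inferred from the sample dump above (assumption).
import json

ALLOWED = {
    "responsibility": {"none", "user", "developer", "government",
                       "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def validate(raw: str) -> list[str]:
    """Return a list of problems found in one raw LLM response string."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    problems = []
    for rec in records:
        cid = rec.get("id", "<missing id>")
        # Comment IDs in this dump all carry a ytc_ prefix.
        if not str(cid).startswith("ytc_"):
            problems.append(f"{cid}: unexpected id prefix")
        # Every dimension must be present and within the assumed schema.
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{cid}: {dim}={value!r} not in schema")
    return problems

raw = ('[{"id":"ytc_Ugw2nR3rH8PduGbGCdp4AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
print(validate(raw))  # → []
```

A record that fails validation (unknown value, missing key, malformed JSON) can then be re-queued for coding instead of silently landing in the results table.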