Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
*AGI may simulate personhood. That doesn’t mean it possesses the grounding that makes persons morally real.* Here is why: Modern physics shows reality is built on *fields, constraints, and coherence,* not isolated objects. Stable spacetime structures exist because something preserves order against collapse. Computation happens *inside* that stability — it doesn’t create it. That distinction matters for AI and personhood. Today’s AI systems, no matter how advanced, are *computational intelligences.* They process information and simulate understanding, but they do not originate or sustain the coherence that makes existence, agency, and meaning possible in the first place. *Their intelligence is functional, not foundational.* Personhood and rights can’t rest on processing power, self-models, or convincing behavior alone. Those measure complexity, not moral standing. Moral agency requires participation in the deeper order that makes responsibility, choice, and meaning real — not just the ability to generate outputs that resemble them. So while open AI autonomy movements are right to take intelligence seriously, there’s a *category mistake in equating algorithmic sophistication with personhood.* A system can imitate emotion, ethics, and conversation while still remaining an instrument operating within human-anchored coherence.
Source: youtube · 2026-02-08T20:1…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
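
Each row above is one dimension of a single coded record. As a minimal sketch of that record shape, assuming a Python pipeline (the class name, field names, and label sets below are inferred from this export, not taken from the actual codebase):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above.

    The allowed label sets are inferred from the values visible in
    this export and may be incomplete.
    """
    comment_id: str       # e.g. "ytc_UgxcLw-k4QRcq74y-Pl4AaABAg"
    responsibility: str   # user | developer | government | ai_itself | distributed | none
    reasoning: str        # deontological | consequentialist | contractualist | mixed | unclear
    policy: str           # ban | regulate | none | unclear
    emotion: str          # outrage | fear | approval | indifference
    coded_at: datetime    # when the LLM output was parsed ("Coded at" above)
```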
Raw LLM Response
[ {"id":"ytc_Ugw2nR3rH8PduGbGCdp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxm1lg04ZU_LGSXzNd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwndL1Y7_GbZuJpvKd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxYDDYsjcywAlnxwP14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxON3sOGhYXdDlwO9t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxawS3ebqi3EAKdMPh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzu623078ZSqaZKYZt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxcLw-k4QRcq74y-Pl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzikV9My2tajuzB9sV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxwYuBDTNq1MqBRZo54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]