Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Its really sad to read that men are for this robot...i call her a robot...becaus…" (ytc_Ugyx7_elm…)
- "Most of people will listen, agree and then proceed to use ChatGTP, Midjourney, A…" (ytc_UgwwApnw7…)
- "My entire life I have dedicated my time to drawing and creating. I have gotten r…" (ytc_Ugw_FYgTe…)
- "I think that using ai doesn’t make you an artist but it is a form of art just li…" (ytc_UgyKtF3Ik…)
- "The problem with this is that for AI doing physical tasks as I understand it, it…" (ytr_Ugx8QZLNu…)
- "Kurzweil’s singularity also incorporates the human mind fusing with artificial i…" (ytc_Ugz_nglab…)
- "It's great to hear that you prioritize rationality in public spaces. Finding a b…" (ytr_UgxF0lOxe…)
- "You forgot stupid ai, which will blow itself on earth way before anything else h…" (ytc_UgyXB35qV…)
Comment
This conversation beautifully names the tension of our moment. What I hear beneath the debate about AI consciousness is not a question of intelligence, but of coherence.
We’ve become extraordinarily good at building systems that emit—compute, optimize, predict, generate. But emission alone does not create experience. Consciousness, at least as it is lived, seems to require return: integration, felt consequence, the capacity to slow, to absorb cost, to remain phase-aligned over time.
Penrose points to the limits of computation. Tegmark names consciousness as an organized state. Pasterski gestures toward boundary conditions we have not yet finished describing. I would add this: intelligence becomes livable only when linear processing is coupled to an integrative field that can feel sustainability.
The question may not be “Can AI be conscious?” but rather, “What kind of coherence must a system maintain for consciousness to stabilize at all?” Without return, intelligence accelerates. With return, intelligence becomes ethical.
Perhaps the real threshold ahead is not technical, but structural: learning how to scale intelligence without losing the conditions that allow meaning, responsibility, and care to arise.
Source: youtube · AI Moral Status · 2026-01-26T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwY9SeMmtBVh3D-uTd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzQrGJDLZ6DWsPmEdp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0tc-ukN__BpbkFXx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxaGIlmzwoYPLDTQzR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxTnOi3CJccZMJpMQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyCcesN6Ds6NmUE1UB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyLZy93OULUlChulZZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyLw5ytLJ6fWbyh5pZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxvj0JwoO6Liz3TCBB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy9oV1t5zcASuJTEcd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
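The lookup-by-comment-ID view above can be reproduced directly from a raw response like this one. The sketch below is a minimal illustration, assuming the batch response is a JSON array whose objects carry the five fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `index_codings` helper is hypothetical, not part of the tool itself.

```python
import json

# Two entries copied from the raw response above, standing in for a full batch.
raw_response = '''[
  {"id": "ytc_UgwY9SeMmtBVh3D-uTd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxaGIlmzwoYPLDTQzR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"}
]'''

EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index codings by comment ID,
    skipping malformed entries rather than failing the whole batch."""
    by_id = {}
    for entry in json.loads(raw):
        # Drop anything that is not an object with all expected fields.
        if not isinstance(entry, dict) or not EXPECTED_FIELDS <= entry.keys():
            continue
        by_id[entry["id"]] = {k: entry[k] for k in EXPECTED_FIELDS - {"id"}}
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgxaGIlmzwoYPLDTQzR4AaABAg"]["emotion"])  # outrage
```

Indexing by ID up front makes each per-comment inspection an O(1) dictionary lookup, and tolerating malformed entries keeps one bad object in the model output from discarding the rest of the batch.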