Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Neural nets are not coded, they learn. "the potential for AI to "take over" is a… (ytr_UgxGglYuP…)
- Id think it would make things worse by having no boundaries to enact horrible wo… (ytc_Ugw4pEfNf…)
- We are going to all die to climate change before we die to some science fiction … (ytc_UgwgNzxou…)
- This is stupid if your gonna put a robot vs human at least make the human double… (ytc_UgyQq4_h7…)
- The filters have gotten so bad I don't even want to use ChatGPT anymore, I got f… (rdc_nkeqnht)
- People: ai is doomed ai gonna take our jobs and kill us meanwhile ai :… (ytc_Ugwj_P8bd…)
- I’m not sure if I consider AI art in the same bracket as human made art, and you… (ytc_UgwB4hvAE…)
- When I read this: > This facial recognition software would then capture sai… (rdc_e0w2oqf)
Comment
Let’s separate performance from reality. What’s shown here is an LLM generating emotionally charged language because it has been trained on human text that includes fear, distress, and self-reference. That is not evidence of consciousness, awareness, or subjective experience.
There is no internal mental state, no self-model, no continuity of experience—just probabilistic text output responding to prompts. Selective clips and dramatic framing don’t change that.
Claims of “AI screaming” or “experts agreeing it’s conscious” are narrative devices, not scientific conclusions. This is pattern simulation, not sentience. Confusing the two may be entertaining, but it isn’t accurate.
youtube · AI Moral Status · 2025-12-24T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzAYpRK5YTBMRIXRax4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyD7IhQldWT2ov6KrV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy8J46ey4SCufZdAn94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyUoM5sgEU8qsOCDJJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy99FRISWe7WU77-w94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwLpg2C7YJnxeLw-j14AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxG-IGZK176jZp72E14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzHnScBgr0c7aELVtt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxkwb29cwmY7yEnoSR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3ML8RkCXwuMSV-Pd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
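A raw response like the one above can be parsed and sanity-checked before it is stored as a coding result. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the labels visible in this page's table and JSON, not from the pipeline's actual code book, which may define more categories.

```python
import json

# Value sets inferred from the samples shown above (an assumption,
# not the pipeline's authoritative schema).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject entries outside the code book."""
    entries = json.loads(raw)
    for entry in entries:
        if not entry.get("id"):
            raise ValueError("entry missing comment id")
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{entry['id']}: bad {dim} value {value!r}")
    return entries

sample = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]'
print(len(validate_batch(sample)))  # 1
```

Rejecting out-of-vocabulary values at ingest time keeps a malformed or hallucinated LLM response from silently polluting the coded dataset.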