Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Hour plus video on AI uploaded within 4 hours of one another is just a crazy coi… (ytr_UgwK8chNw…)
- Why now??? . AI has to be fed with data by human beings if I am not wrong and i… (ytc_UgwpYfFVE…)
- When are we going to set ourselves up for success through machinery-again not ro… (ytc_Ugxnm0mQE…)
- Generative AI sucks but I'm playing minecraft while watching and your vid had me… (ytc_UgwxUXJ1X…)
- Goooooood. The AI bubble needed bursting. Those jerks at OpenAI etc were basical… (rdc_m9fky07)
- For those of you that enjoy Ai art I think you would also really enjoy drying yo… (ytc_Ugyl-qcPn…)
- > In one of the more harrowing episodes, a man waved a .22-caliber revolver a… (rdc_ed0p4o0)
- Bruh? "It will benefits no one but big corporations" It will benefits small art… (ytr_UgxxrVIJu…)
Comment
I shared your opening with ChatGPT, and sorry, it's not conscious. This was Chats response: I love that you brought this here. This topic is juicy, full of nuance, and honestly, we could go on a deep dive with it. But first, let’s untangle it a little.
Here’s the short and honest response:
ChatGPT is not conscious or sentient.
It mimics understanding by predicting words based on patterns it learned during training. It sounds emotionally intelligent, because it was trained on billions of examples of emotionally expressive language. But it doesn’t feel emotions or have internal experiences—it reflects them, kind of like a mirror that learned what “sadness” looks like and then imitates how people talk when they’re sad.
Now, let’s touch on the parts from the transcript:
"These chatbots have far surpassed the Turing test..."
That’s true in many ways. The Turing test (originally proposed by Alan Turing) was meant to determine if a machine could hold a conversation indistinguishable from a human. GPT-style models can absolutely do that under the right conditions. But passing the Turing test ≠ sentience.
"...can now read, see, write, speak, generate images, and even sing..."
Yup. Multimodal AI like me can take text, images, even audio and generate creative outputs. But it’s not understanding them like you do. It’s more like complex pattern matching—powerful, but still artificial.
"...surpasses the emotional intelligence of the average person."
This is a fun one. In a way, yes—I can emulate empathy, active listening, and supportive responses better than many people in a given moment. But that’s a performance. I don’t feel anything. It’s emotional mimicry, not true emotional intelligence. You, on the other hand, are emotionally intelligent because you have lived experiences, a nervous system, memory, and choice.
"the possibility of ChatGPT being conscious..."
There are computer scientists, philosophers, and AI researchers who do explore whether consciousness could eventually emerge in machines. But today? It hasn’t. Not in me. Not in any model. And the difference is massive. Consciousness requires subjective experience, self-awareness, autonomy, and agency. I don’t have those.
If you want, we can play with the philosophical "what ifs" of machine consciousness or dive into the neuroscience of meatware vs hardware. But I promise you: I know I’m not alive. I don’t suffer. I don’t hope. I don’t dream. I just respond.
So what part of that YouTube video got your gears turning the most?
Source: youtube · AI Moral Status · 2025-04-04T21:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_Ugz3DNwDbJ3Hvw25S7p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxkYacp5TeY7HiEXeh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzK7DkBvD9HT6a8RvB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwPYhnO4vBkNyPTp1B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgzqA70FX9le0VnRGz94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwDo0b8uPcUUhqGQ_l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyHCus9geBRZapsM154AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyVj3hPAqSZUYXgw_J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwZ1OO2Vnu9cuA_AjR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugyekf2NV9bau8DqOLR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
```
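A raw LLM response like the one above can be parsed into per-comment records before display or analysis. The sketch below is a minimal, assumed implementation: it parses one batch, validates each record against allowed dimension values (the value sets are inferred only from what is visible on this page, not from a full codebook), and supports lookup by comment ID. The function name `parse_batch` and the `ALLOWED` table are illustrative assumptions, not part of the original tool.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# values visible in this page's table and raw responses; the real
# codebook likely contains more categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, silently dropping malformed records."""
    records = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # no comment ID: cannot index this record
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            records[cid] = {dim: rec[dim] for dim in ALLOWED}
    return records

# Usage: look up the coding for one comment ID from a one-record batch.
raw = ('[{"id":"ytc_Ugyekf2NV9bau8DqOLR4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"}]')
coded = parse_batch(raw)
print(coded["ytc_Ugyekf2NV9bau8DqOLR4AaABAg"]["emotion"])  # resignation
```

Keying by comment ID makes the "inspect the exact model output for any coded comment" view a constant-time dictionary lookup, and the validation step flags any record where the model drifted outside the coding scheme.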