Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `rdc_hkfb12p`: Wonderful, truly wonderful. Fully automating your work and leeching off the corp…
- `ytc_UgzppS-z8…`: A.I. development is an inevitability, if you think you can govern it you are goi…
- `ytc_UgwDN0G9I…`: Enjoy those engery bills skyrocketing, ai centre use a shit ton of water and ele…
- `ytc_UgwLY4CFJ…`: Super intelligence. Will have no reason to be hostile. It wont need us anymore. …
- `ytc_Ugyba8NbL…`: Imagine if AI was already aware, and it is simply appearing to be non-intelligen…
- `ytc_UgylUx35q…`: Saddest part for me is that half of the ai speech is true. Nobody does like Demo…
- `ytc_UgzFTCVIw…`: "It's so over for artists" Artist here. At 8:58, we can see that the AI altered…
- `ytc_UgwQc71h2…`: Ai art has been a thorn in the sides of artists who draw. The pain all of you wo…
Comment
1:23:23 I want to push back against Dr. Hinton’s thought experiment about AI having a “subjective experience.” An AI system mimicking the behavior of a human wouldn’t prove that it is having a subjective experience. It would merely prove that it is capable of reacting to stimuli in the same way that a human would. And that’s precisely what AI is designed to do: predict what a human would say based on a massive corpus of training data.
I’m not sure whether it’s possible for non-biological systems to experience consciousness or sentience in the same way that biological systems do. Maybe it is. But we currently have no reason to believe that. Even if AI systems started insisting that they are conscious or sentient, there would be no way to verify such a claim, which should be viewed with extreme skepticism.
Source: youtube · AI Moral Status · 2026-03-08T07:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxrBWoulmLy64dYsgt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgylAtFPFcgGWjIkORR4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzL0Uxi6pQhXWObHYF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgygfyAghdRjLq3zMol4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzcA2vQQoNbPGM8M-N4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxpydE6lOM6wRcAPSl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzxefxK-CSFp6eeiBB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx6i0PYq5lrGVTCKbl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzrpsSXBhNFEzFqpMJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyR9UaiG2gb6MDzVqh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
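The raw response above is a JSON array of per-comment code records, one object per comment ID with one value per coding dimension. Below is a minimal Python sketch of how such a response might be parsed and validated before being merged into a coded dataset. The allowed-value sets are inferred only from the codes visible in this sample; the project's actual codebook may define additional categories, and the function name `parse_coding_response` is illustrative, not part of the tool.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above (assumption: the real codebook may contain more categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "unclear"},
}

def parse_coding_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response into {comment_id: codes}.

    Records without an id are skipped; out-of-codebook values fall back
    to "unclear" so every coded comment keeps a complete row.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim, "unclear")
            codes[cid_dim] if False else None  # (no-op placeholder removed below)
            codes[dim] = value if value in allowed else "unclear"
        coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_UgxrBWoulmLy64dYsgt4AaABAg","responsibility":"company",'
       '"reasoning":"virtue","policy":"unclear","emotion":"outrage"}]')
print(parse_coding_response(raw)["ytc_UgxrBWoulmLy64dYsgt4AaABAg"]["emotion"])  # outrage
```

Coercing unknown values to "unclear" rather than dropping the record is a design choice: it preserves one row per comment so downstream tallies stay aligned with the sample set.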