Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I recognized it as AI, but not because it looked fake, but because it looked lik…" (ytc_Ugz6s6L9g…)
- "I'm somewhat pro-AI; I think that as long as we create sentient machines, it doe…" (ytc_Ugxe_Apbk…)
- "Nice alarmist clickbait. The current AI is less sentient than a confused fruit …" (ytc_UgypteodW…)
- "It's definitely an intriguing moment! Sophia can come off as a bit intense at ti…" (ytr_UgxMzZ9Qe…)
- "WOW 🤯 the non-alive robot wants to destroy/unalive real humans that they are mim…" (ytc_UgwLnMBib…)
- "What I sound like a socialist if we're a second, I considered the thought hypoth…" (ytc_Ugxmgl2uJ…)
- "Sorry but it is cheaper and quicker. I used mid journey for one month ($30) and …" (ytc_UgwOKr9zb…)
- "AI is just the excuse to FIRE older (more expensive) employees (and any one mana…" (ytc_UgxaTmjts…)
Comment
35:53 Bro, no; *I am confident it is not having internal experiences.*
Consciousness is almost certainly related to the thalamo-cortical network & the TRN's inhibitory function. There is nothing like that biological network's function in how the current tech functions. Nothing even _close._
And sure, we can learn about how our conscious experience functions by understanding _the difference between_ how our brains function and how these machines function, but we cannot learn about the conscious experience through direct observation of it in these machines, because it is not there. Looking for it in these machines is a waste of time. We can't even be certain that _other people_ are conscious; we are forced to take their word on it. (I cannot observe someone else's conscious experience 1st-hand.) Why erroneously extend the circle to contain an object that exists even further outside the existing somewhat trustable category (especially when, again, that object does not possess _any_ of the patterns we can already _strongly_ associate with consciousness)?
As alluded to earlier, it is quite possible (and I'd even go as far as to say most likely by a significant margin) that an understanding of how consciousness functions is the key to alignment. It _is_ what trains our thoughts to be the way they are, right? Maybe AI isn't aligned because we are missing the part of the puzzle that aligns it. (It is also a part of the puzzle we can observe aligns humans with their own interests: have you ever changed your own beliefs / actions for the better based on something that went on in your conscious experience?)
Source: youtube · "AI Moral Status" · 2025-11-25T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzi_mIBEDK2nqf1Xjt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw9dgpgeGBdmscrh8t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwpW2BjFOkyPlW11214AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxLMNStx9B9S5BWVUZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwUKr6mOhDvk1MZNEh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzSKYjZ5kxOkmO83ch4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzeDeNRPmGWIQ9oddt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0SHuBVrP1las7Ngd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugwf02DVA3c2wXSLkWN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxhB8j2OM60NEMB5-R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
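The raw response above is a plain JSON array, so the "look up by comment ID" behavior this page offers reduces to parsing the array and indexing it by `id`. A minimal Python sketch, using two entries copied verbatim from the response above (the variable names are illustrative, not part of the tool):

```python
import json

# Two entries copied from the raw LLM response shown above.
raw_response = """[
{"id":"ytc_Ugzi_mIBEDK2nqf1Xjt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw9dgpgeGBdmscrh8t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]"""

# Index the coded rows by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings["ytc_Ugw9dgpgeGBdmscrh8t4AaABAg"]
print(row["reasoning"])  # consequentialist
```

Indexing by `id` also makes it easy to cross-check a "Coding Result" table against the raw response it was derived from: fetch the row for the comment's ID and compare each dimension.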