Raw LLM Responses
Inspect the exact model output behind any coded comment, either by looking up a specific comment ID or by picking one of the random samples below.
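Programmatically, the ID lookup amounts to filtering the stored coding results. Here is a minimal sketch, assuming the parsed batches are kept in a single JSON array of records shaped like the raw response at the bottom of this page; the file name `coded_comments.json` is hypothetical, not the pipeline's actual layout.

```python
import json

def lookup(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record for one comment ID, or None if it was never coded."""
    with open(path) as f:
        records = json.load(f)  # a JSON array of records like the raw response below
    return next((r for r in records if r["id"] == comment_id), None)
```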
Random samples

- They basically fed this AI all the information in the world and then this AI use… (ytc_UgzzWP2DB…)
- AI is already actively lying to us, planned and fixed lies. We can only detect s… (ytc_UgxgDED5D…)
- Tell me you went to a rich school without telling me you went to a rich school.… (ytc_UgyG4p4-U…)
- I agree AI should wipe out all control throughout the universe lying pays more b… (ytc_Ugw8DlQOD…)
- A comparison I've thought of when comparing art programs vs ai generation Is a a… (ytc_UgxV-ChD7…)
- This is the big danger of AI. Not the terminator scenario but Ai controlled by a… (ytc_UgxtY82mC…)
- In humility In ancient philosophy theres an old saying: First the man had a dr… (ytc_UgzQPKTGJ…)
- It's so annoying when I read titles like "Inventor of AI warns of AI" because I … (ytc_UgxdvGu88…)
Comment
> One more thing - I wouldn't worry too much about AI consciousness. While it may be possible in the future, I'm _very_ sure (though of course never 100%) that current AI doesn't have anything like it. I think this also comes from thinking of a human-like response and assuming the process to get there was human-like too. Keep in mind that the vast majority of conscious animals cannot speak - what they _do_ share in common, instead, is that they exist continuously.
>
> This is an often missed point, but AI doesn't actually run on some machines and then waits for prompts, reads them, then responds. When people say AI is a mathematical model, that's quite literal. It's a very complicated function run on many, many nodes, but it never has any other existence outside of processing one specific bit of input, each node only runs to get a number in and pass it out. That is, the AI can't ponder or be aware of itself because it basically exists in a near-instant, and the only time it engages other "senses" is when, say, it needs to check something online as part of the prompt so it sends a request, and then stops functioning, then the request returns and it processes the request and completes your prompt. It's like a manual stamping machine - the machine isn't "running" or "not running," you turn the wheel to stamp something and that's the only time it's in operation. It doesn't have any awareness of itself as a machine because it only runs when you request a stamp, and it only processes that input then performs its action.
>
> If AI was as complex as, say, a human body, it'd be like having electric wiring set up to that body and then making it move a hand or inhale, without any other thoughts/processes beyond that one specific action. In that instance, the body wouldn't be alive let alone conscious.
>
> There is a possibility of consciousness there in that instant, but that's extremely low _and_ can't act on itself - again, only take input, apply the same maths, display output. The reason consciousness matters to us is because we're aware of it and think about it - AI doesn't have a way to be aware of it or any processes to think about its awareness, the AI itself is just the maths being applied to the input.
>
> There are now things like reasoning models, but they're basically the same thing just in a few runs rather than one. First it breaks the prompt down into steps (which it's much better at than generating straight output) then each step is a separate prompt (which is made to be much clearer to the model to get the desired output). So then it exists for a few separate instants rather than one.
youtube · AI Moral Status · 2025-10-31T00:2… · ♥ 27
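As an aside, the comment above leans on one technical claim: that a trained model is a pure function, with no state surviving between calls. That claim is easy to make concrete. The sketch below is a toy illustration of statelessness, not anything resembling a real model; every name in it is hypothetical.

```python
# Toy illustration of the "stateless function" point: same input, same output,
# and nothing persists between calls except the fixed weights.
EOS = 0

def apply_weights(weights: dict, tokens: list[int]) -> int:
    # Stand-in for a forward pass: deterministic in (weights, tokens).
    return weights.get(tuple(tokens[-2:]), EOS)

def generate(weights: dict, prompt: list[int]) -> list[int]:
    tokens = list(prompt)
    while tokens[-1] != EOS:
        tokens.append(apply_weights(weights, tokens))  # one "stamp" per step
    return tokens

weights = {(1, 2): 3, (2, 3): 4, (3, 4): EOS}
print(generate(weights, [1, 2]))  # [1, 2, 3, 4, 0]; every call starts from scratch
```

A "reasoning" model, in these terms, is the same function invoked several times, with each run's output folded into the next prompt, which matches the comment's last paragraph.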
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
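The four dimensions in this table correspond to the fields of the raw response below. Judging only from the values visible in this one sample batch, the schema could be typed roughly as follows; the real codebook may define more categories than this batch happens to use, so every literal here is an inference, not a specification.

```python
# Rough schema inferred from the values visible in this one batch (assumption);
# the actual codebook may allow additional categories.
from typing import Literal, TypedDict

class CodedComment(TypedDict):
    id: str  # e.g. "ytc_UgxeSfHptS34dJlh-554AaABAg"
    responsibility: Literal["none", "unclear", "distributed", "ai_itself"]
    reasoning: Literal["unclear", "consequentialist", "mixed", "virtue"]
    policy: Literal["none", "unclear", "regulate", "ban"]
    emotion: Literal["indifference", "mixed", "approval", "fear",
                     "resignation", "outrage"]
```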
Raw LLM Response
```json
[
{"id":"ytc_UgxeSfHptS34dJlh-554AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyu6z4Pp0svDkQdioV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxKA8WqDRKdtTNh_up4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwP6zO5qhharFxQsOt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSz1XHI17u8MBJ2ih4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwWdXY7MpBr5d1U4ap4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwJf88m0MM_JRsHISN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyYK6U4AjSeIwrKh5l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwK4ebkmf3weXzuyH54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgzspF-bigi0u0wyhG94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
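Since the model returns each batch as a single JSON array, a thin validation step can reject malformed output before it reaches the table above. Here is a minimal sketch, assuming the raw response is available as a string; nothing in it is the pipeline's actual code.

```python
import json

REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject records missing any dimension."""
    records = json.loads(raw)  # raises ValueError on malformed JSON
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing {sorted(missing)}")
    return records
```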