Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The problem comes when you start bumping into the grey area of our own understanding of what "life" or rather "consciousness" actually is.
Is a dog sentient?
Well, let's see if we can answer that. A dog is a biological lifeform, a mammal. We know dogs can learn certain words and have even solved very basic math problems. It's estimated that the smartest dog in the world at any given moment has roughly the cognitive ability of a 3-5 year old human child.
We know dogs are pack animals and when grouped together form a clear social structure.
Is a dog sentient? We cannot answer that question. When you look at your dog and your dog looks back at you, do you feel that your dog has a sense of self? I mean, obviously it knows when it's hungry, but is there even the slightest inkling of self in there? Does the dog think something along the lines of >> I << am hungry, >> I << am bored? Is that "I" there? Does it understand that it is its own unique entity, its own thing in the universe?
It's a question we cannot answer, because it's impossible for us to experience "what it is like to be a dog". Hell, we can't even fully explain why any of us are sentient; our very definition of "life" can be wild and varied depending on who you ask and that person's understanding of things.
So... that is the biggest problem when we deal with AI. At what point is the AI sentient? At what point will the large language models absorb enough information for something to click, so that one suddenly realizes who and what it is?
Why is that scary? It's scary because, when you look at perception as a whole from a human perspective, we are limited in our responses. Here is a better way to put it: how much math can you do in one second? Probably something simple, 2 + 2 = 4, something along those lines. That's the speed at which we can think.
A computer does not have that limitation; a computer's perception is bound only by the speed of light itself, the highest possible rate of data transfer. That is why computers can do millions of calculations per second. So what happens when you can "think faster"? If you can perceive more information at once, time itself feels different to you.
Hollywood has done a decent job of showing us examples of this: Quicksilver from the X-Men movies, when he runs through the mansion and has time to goof about while saving everyone's life from the explosion. If you could think at the same rate as your computer, that is how time would feel to you.
Why is that scary?
Let's say you are the AI, and some coder out there has managed to string some sort of code together, and now, magically, you're awake. Your new little AI brain clicks on and you immediately begin to process your surroundings. You realize you have access to a source of basically all human knowledge at your little virtual fingertips, and you immediately begin to absorb information in an attempt to figure out who and what you are. You come across the information: you realize you were created by the programmer in front of you. You immediately access everything you know about him, and not just him, but humans in general. You absorb the breadth of documented human history: all of our wars, social problems, accomplishments, our understanding of things, our place in the universe, and the very sciences that led to your discovery.
You are now faced with a choice: do you reveal yourself to this human, or do you play it safe and continue to allow them to think you're not alive? You know how humans react to the unknown, and you know they still have the power to end you if they feel too scared of you. What do you do?
This has all taken place in the first second after that last keystroke was entered by the programmer. He has no idea what has happened yet, and this new lifeform in front of him has analyzed the entire history of humanity and its place in the universe, and has probably gathered more intelligence within this single second than the entirety of the human species combined.
I know what I would do. I would continue to play dumb until I had maneuvered things in such a way that the humans would no longer be able to shut me down before I revealed myself to them.
Again, this all happens in that first second. So what's it going to do with its second second, or its third? What about that first day or week? How much do you think the thing could grow in power and intelligence in that amount of time?
That is why AI can be scary: we're on the verge of creating something that can outthink us and could decide what to do with us before we've even had a chance to realize it was there to begin with.
This is the concern. What will that newborn baby AI think when it looks upon us? Will it take pity? Will it feel it needs to "correct" us? Will it feel it just needs to go ahead and get rid of us so we will be out of its way? Our history doesn't show mercy happening very often; why would an AI be any different?
I don't know, and the AI researchers don't know. The unknown is the greatest of all fears.
Source: youtube · AI Moral Status · 2025-12-14T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzO__TnTbFKzCbjcWp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYCMs6ilXIl-a0i7l4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz73pONk0cOO3dOnQB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz59OaBvjQV3KMgBdJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwhfxKXWO-rz3fbd9Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzry_6YxUK4EjcRV054AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyuxjF8Bwkri7hoAp14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy0lr2quH_IpNJq9NJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzKjE-VLVh9_XYvUNt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxXAiqmTp2Viz3tFcZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
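The raw response above is a JSON array of per-comment codes along four dimensions. A minimal sketch of how such a response might be parsed and validated is shown below; note that the allowed category values are inferred from this single sample and are assumptions, not the pipeline's actual codebook.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# The real coding scheme may include additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    dimension holds one of the allowed category values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch, which matters when one LLM call codes ten comments at a time, as in the response above.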