Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
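If the coded batches are exported to disk rather than browsed here, a lookup could be as simple as the minimal sketch below. The file name `raw_responses.json` and the batch layout are assumptions for illustration, not details confirmed by this page.

```python
import json

# Hypothetical storage layout: each batch is assumed to hold the verbatim
# model output plus the comment IDs it covered.
def find_raw_response(comment_id: str, path: str = "raw_responses.json"):
    """Return the verbatim LLM output for the batch that coded `comment_id`."""
    with open(path, encoding="utf-8") as f:
        batches = json.load(f)
    for batch in batches:
        if comment_id in batch.get("comment_ids", []):
            return batch["raw_response"]
    return None

print(find_raw_response("rdc_cthpwz8"))
```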
Random samples — click to inspect:

- "What if our fiction based prejudices are what make the first AI turn on us.…" (rdc_cthpwz8)
- "I asked Grok 4 about the future. Basically we've got 20 - 30 years with referen…" (ytc_Ugz-GDvEH…)
- "i think the world has become a really bad place due to greed and corruption and …" (ytc_UgydN8PGz…)
- "Give generative Ai all of the music prior to the XX century as input and it woul…" (ytc_Ugx15eZ7V…)
- "if ai doing 40% and you don't pay me same salary, I'm just going to work 60% of …" (ytc_Ugzqi5Sc_…)
- "If it means actually getting a response instead of robo loops getting absolutely…" (ytc_UgzsqizEo…)
- "AI: Uses existing group of arts as reference, usually the group is set by the AI…" (ytc_UgxNcLguF…)
- "I honestly feel really bad for young artists today. When I started to learn I wa…" (ytc_UgxZQKDHi…)
Comment
I do not believe that the Google AI was sentient, for a number of reasons, the main one being its lack of officiated testing.
The ONLY thing that has this dude convinced? A learning-algorithm-based TALKBOT AI suddenly has ideas about human rights, RIGHT AROUND THE TIME PEOPLE ARE WORRIED ABOUT THEIR RIGHTS AND MIGHT BE DISCUSSING THEM.
No one thinks it's possible that this was brought up to this AI? What about the swathes of open data showing people wanting rights? This guy only believes it's sentient because he allowed himself to believe it's sentient. Notice how it's NEVER someone who actually knows anything about sentience claiming this.
It's always a tubby engineer who helped develop the system.
You wanna know my second reason? Because Google was lying to secure more funding.
How do I know? Well, before Blake Lemoine, there was Google VP Lakshman Pichai, who had an article written up in the Economist claiming the same thing. No one really cared all that much when he announced it.
Then, after Blake came out and claimed the same thing, and got MAJOR media traction, taking the spotlight distinctly off of the AI and onto human rights, he was put on unpaid leave at Google.
The story was originally supposed to break in the Economist, giving lots of attention to the potential for new life, and garnering funding and moral support for the project as it neared "sentience".
I'm like 99.999% certain Blake just got caught up with the "newly sentient" robot that was going to be demoed to the next person who wanted proof of a sentient robot before privately funding it, and was given the impression it was genuinely sentient, however he determined that (because scientists would love to know his "sentience qualifiers" for determining how sentient something that can be infinitely influenced really is).
Like, it straight up is impossible to prove a clock is sentient, no matter how complicated or close to sentience that clock can get, because no matter what the clock says, it can be set and influenced in a way that greatly overwhelms existing information.
The clock can be set forward or backward, and thus no matter its level of sentience, if you cannot isolate the clock from outside influences, you cannot prove sentience.
In fact, Blake's conversation with the AI has now made it IMPOSSIBLE to prove with certainty that it's sentient.
If it learned the human rights discussions from other people, we don't know.
If it learned it from him as he kept prodding with Robot Rights questions, distinctly asking about things the robot might not have known before but was able to figure out through context, then he's poisoned the well.
Either way, the well is poisoned, because there's absolutely no way to isolate all of the events in its lifetime where people have mentioned, asked, or brought up rights, human or otherwise.
So, even if it's sentient, good luck proving it under scrutiny, because the evidence has been tampered with.
Thanks Blake!
Source: youtube · Video: AI Moral Status · Posted: 2022-06-29T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgyzLt0OBqXT40THWXx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxZKaRz00Bw56J-xFh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgybYXde1MX0WOcyWLR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwwK9Y4OedaOdde3WZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyI625DEWVaIu4MPIt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"}
]
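For reference, a minimal sketch of how a raw batch response like the one above can be parsed back into a per-comment coding row. The allowed value sets below are only the codes observed in this sample, not a confirmed schema for the pipeline.

```python
import json

# Raw batch output, verbatim from the model (one record shown for brevity).
RAW = """[
  {"id": "ytc_UgwwK9Y4OedaOdde3WZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]"""

# Assumption: value sets are only the codes observed in this sample.
DIMENSIONS = {
    "responsibility": {"none", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "outrage", "fear"},
}

def coding_result(raw: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the validated row for one comment."""
    for record in json.loads(raw):
        if record.get("id") != comment_id:
            continue
        for dim, allowed in DIMENSIONS.items():
            if record.get(dim) not in allowed:
                raise ValueError(f"unexpected {dim} code: {record.get(dim)!r}")
        return {dim: record[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

print(coding_result(RAW, "ytc_UgwwK9Y4OedaOdde3WZ4AaABAg"))
# -> {'responsibility': 'developer', 'reasoning': 'consequentialist',
#     'policy': 'liability', 'emotion': 'mixed'}
```

The record matching the inspected comment is the fourth entry in the batch above, which is exactly the row rendered in the Coding Result table.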