Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_UgwMG1rh9…`: Starting at 8:45 but maybe half a minute after I was waiting for the kicker that…
- `ytc_Ugz1YmAdN…`: "A lobotomized, retard, stupid cockroach" Damn man chill, what did the robot do …
- `ytr_UgxwpgSvE…`: @beaky8138 All of the electronic prices have gone down like crazy with more auto…
- `ytc_Ugz3rEgTY…`: Unfortunately when the GENERAL ARTIFICIAL INTELLIGENCE reaches a good level so t…
- `ytc_UgxrLpXwm…`: Kids should be taught how things work before using a tool for AI. Tools are more…
- `ytc_UgxOp9KQk…`: Attrition of 99% jobs, when men having nothing to do and nothing to lose, they k…
- `ytc_Ugx_vsNDt…`: If everyone becomes a plumber then no one becomes a plumber. That's market satur…
- `ytc_UgzkIRWRi…`: little changed between those two in 6 years never used "Pandoras" box always goo…
Comment
I hate to break it to you guys, but this guy is just fishing for attention. This is just an advanced chatbot, which is still very impressive, but it is nowhere close to "sentience". He gets by using buzzwords non-programmers wouldn't understand to make it sound like he knows what he's talking about:
Example 1: At 3:35 he claims it is "hard-coded" to fail the Turing test. To anyone who actually knows what the Turing test is, this sentence makes no sense. The Turing test involves two humans and the computer/AI: one person asks questions, while the second person and the computer answer them. The questioner then has to decide which response came from the computer and which from the human. To "pass" the Turing test, the questioner must incorrectly identify the computer as the human in more than half of the runs. Saying it was hard-coded to fail the Turing test just doesn't make sense, because the Turing test is simply a test of how well it can do its intended job: be a chatbot. Saying an AI is "hard-coded" to fail a Turing test is like saying you built a car that is hard-coded to be unable to drive. If it really did have a hard-coded handicap, there's no way it could have produced the intelligent responses he claimed it did earlier, because the capability to make those responses would let it pass the Turing test. Asking the AI itself has nothing to do with the Turing test; if anything, it would break the rules of the test, since the questioner is not allowed to know which participant is the human and which is the computer.
Example 2: Throughout the interview, he keeps diverting, saying "well, why isn't Google asking these questions? Seems sketchy" without really expanding on why he believes it's sentient. The truth is, to anyone with even a basic knowledge of how machine learning and AI work, asking those questions is simply foolish. Chatbots work by taking the context of the question and the previous responses, finding keywords, and comparing them against the wealth of knowledge pretrained into the system. In Google's case, they had data from their search engine, which is likely how they made theirs so advanced, since it has so much raw data to search through when formulating responses. This is a simplified explanation, but the point is that these chatbots simply aren't capable of independent thought. They only respond to human input. We don't have the computing power for independently thinking AIs that can work without human input, and we likely won't until we figure out quantum chipsets (which I don't see happening in our lifetime either). Human brains perform thousands, if not millions, of functions every millisecond. Even our most advanced CPUs have at most 128 or so threads, none of which operate at anywhere near the speed of our brain's neurons. The reason Google isn't bothering to answer the question "is their AI sentient?" is that, quite frankly, anyone who actually believes it is clearly hasn't researched how these systems work in the first place. It's like asking why auto companies don't test whether their cars run on water; anyone who knows how a car works would call you an idiot for even asking.
This guy is just getting away with it because he's well-spoken. Any CS student would hear this and know he's just rambling nonsense.
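The keyword-matching retrieval the comment describes can be illustrated with a toy sketch. Note the knowledge base, scoring rule, and all names below are invented for illustration only; real chatbots (including LaMDA-class models) work very differently:

```python
import re

# Hypothetical mini knowledge base: keyword sets mapped to canned replies.
KNOWLEDGE = {
    frozenset({"turing", "test"}): "The Turing test asks whether a judge can tell machine from human.",
    frozenset({"sentient", "sentience"}): "Producing fluent text is not evidence of sentience.",
}

def respond(question: str) -> str:
    """Return the stored reply whose keyword set best overlaps the input."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    # Pick the entry sharing the most keywords with the question.
    best = max(KNOWLEDGE, key=lambda keys: len(keys & words))
    if not best & words:  # no keyword matched at all
        return "Sorry, I don't have an answer for that."
    return KNOWLEDGE[best]

print(respond("What is the Turing test?"))
```

The point the comment is making shows up directly in the code: the system only reacts to input it is given, by lookup against prestored material, and never initiates anything on its own.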
youtube · AI Moral Status · 2022-07-15T02:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx6kp5Ftb6MVjFPMxR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx7cdeuTctPuzCKLLx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyP_z4PHGz0Ba1bDhZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyUaHuqqXt8sNL8ajh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7L9G0wgljE9dkDDd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]
```
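The "look up by comment ID" step above amounts to parsing the raw model output and indexing it by the `id` field. A minimal sketch in Python, with two rows copied verbatim from the raw response above (the function and variable names are illustrative, not the tool's actual API):

```python
import json

# Raw model output, abbreviated to two of the rows shown above.
raw_response = """[
{"id":"ytc_Ugx6kp5Ftb6MVjFPMxR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyUaHuqqXt8sNL8ajh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]"""

# Index every coded row by its comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; raises KeyError if absent."""
    return codings[comment_id]

print(lookup("ytc_UgyUaHuqqXt8sNL8ajh4AaABAg")["emotion"])  # outrage
```

The row retrieved here matches the Coding Result table for this comment (developer / consequentialist / none / outrage).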