Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Did not find it in his linktree. But the AI he's talking about is called Mindgra…" (ytc_UgzCXjHio…)
- "The idea that AI pictures are perfect is funny. Count fingers, count teeth, chec…" (ytc_UgzXOpCzT…)
- "No, AI don't need to, and someone needs to shut down AI art everywhere, it's hec…" (ytc_Ugxzsjlfu…)
- "So this might sound controversial for some, but I don’t mind AI art, it’s only a…" (ytr_Ugx-I827w…)
- "You missed a point slightly. While jamming can provide cover from drones - but t…" (ytc_Ugy7t3gD6…)
- "@joeliza4047 gonna be a long time before you get a robot to climb on its belly t…" (ytr_UgyObU5Ar…)
- "Xpeng walks looks a bit unsteady, like he are having a seizure. Tesla’s robot wa…" (ytc_UgysqRpQp…)
- "Definitely the best interview I've seen on the Internet very very intelligent pe…" (ytc_UgyVas1tj…)
Comment
At 4:45 we get the (possibly) real and valid reason why Blake raised these questions. The discussion after that is somewhat sensible, and many of these ethical questions are important, but then again at 10:13 we fall right back into silliness. If he was using this charade to bring attention to other ethical questions, he's doing a poor job; as Meg Mitchell also pointed out. If he actually does believe, based on his conversations (of which he have posted parts on his blog), that Lambda is sentient, he does not have a very good understanding of the mathematical and technical principles these systems are based on (he also notes on his blog that he does not have access to any source code, and cannot look under the hood). Neither does he seem to have done a very good job of testing whether it actually understands what it process. At least, for starters, he could've asked it any number of Winograd schemas or derivations of Bongard problems, or just simple things like, "what did the word 'it' refer to in the first paragraph of my second question about topic A?" Lambda would have stood no chance on these types of questions, at least not in general. If you really think it "deserves" a Turing test, wouldn't you construct one, i.e. where you try your best at "breaking" it; asking questions that it shouldn't be able to answer just by "brute force statistics"; actually test its understanding of the meaning of what you/it are writing. No, instead he wanted to "raise awareness" and bring AI ethics into the light, but imo it's just distracting and frankly rather embarrassing. He also risks pushing a lot of people who doesn't know better (frankly, why should they) into these silly beliefs. If anyone is actually wondering whether Google created actual consciousness, it did not. If you have read this far, you might be interested in reading books like Artificial Intelligence by Melanie Mitchell or the substack-blog by Professor Gary Marcus, for a sensible and thorough treatment. To quote Oren Etzioni: "“When AI can’t determine what ‘it’ refers to in a sentence, it’s hard to believe that it will take over the world.”
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2022-09-19T17:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
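Each row above is one field of a single coding record. As a rough sketch of that record's shape, the type below is hypothetical, and the example values are only those that appear on this page (the full codebook may define more); note that the raw JSON below carries no timestamp, so `Coded at` is presumably stamped when the response is stored.

```python
from typing import TypedDict

class CommentCoding(TypedDict):
    """One coded comment, mirroring the dimensions shown above.

    Value sets are illustrative: only the categories observed in
    this sample are listed.
    """
    id: str              # platform-prefixed comment ID, e.g. "ytc_..." or "ytr_..."
    responsibility: str  # e.g. "none", "company", "elite", "user"
    reasoning: str       # e.g. "mixed", "consequentialist", "deontological", "virtue"
    policy: str          # e.g. "none", "liability", "ban"
    emotion: str         # e.g. "indifference", "fear", "outrage"
```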
Raw LLM Response
[{"id":"ytc_UgxwGSLLMeIAI682jhZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwBwDOFdEjNz4Ir-194AaABAg","responsibility":"elite","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzolWtZTtMrmO4hMLZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy4Tp6dwManfXpF9Ap4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyKs6UlxVijaP1UPxB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}]