Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At 4:45 we get the (possibly) real and valid reason why Blake raised these questions. The discussion after that is somewhat sensible, and many of these ethical questions are important, but at 10:13 we fall right back into silliness. If he was using this charade to bring attention to other ethical questions, he is doing a poor job, as Meg Mitchell also pointed out. If he actually does believe, based on his conversations (of which he has posted parts on his blog), that Lambda is sentient, he does not have a very good understanding of the mathematical and technical principles these systems are based on (he also notes on his blog that he does not have access to any source code and cannot look under the hood). Nor does he seem to have done a very good job of testing whether it actually understands what it processes. For starters, he could have asked it any number of Winograd schemas or derivations of Bongard problems, or just simple things like, "what did the word 'it' refer to in the first paragraph of my second question about topic A?" Lambda would have stood no chance on these types of questions, at least not in general. If you really think it "deserves" a Turing test, wouldn't you construct one where you try your best to "break" it: asking questions that it shouldn't be able to answer just by "brute force statistics", actually testing its understanding of the meaning of what you/it are writing? Instead he wanted to "raise awareness" and bring AI ethics into the light, but imo it's just distracting and frankly rather embarrassing. He also risks pushing a lot of people who don't know better (frankly, why should they?) into these silly beliefs. If anyone is actually wondering whether Google created actual consciousness, it did not. If you have read this far, you might be interested in books like Artificial Intelligence by Melanie Mitchell or the Substack blog of Professor Gary Marcus, for a sensible and thorough treatment.
To quote Oren Etzioni: "When AI can't determine what 'it' refers to in a sentence, it's hard to believe that it will take over the world."
youtube AI Moral Status 2022-09-19T17:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxwGSLLMeIAI682jhZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwBwDOFdEjNz4Ir-194AaABAg", "responsibility": "elite", "reasoning": "mixed", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzolWtZTtMrmO4hMLZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy4Tp6dwManfXpF9Ap4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyKs6UlxVijaP1UPxB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]
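A minimal sketch of how the coded dimensions for one comment can be recovered from a raw batch response like the one above. It assumes only the structure visible in the output (a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys); the helper name `coding_for` is illustrative, not part of any actual pipeline:

```python
import json

# Abbreviated stand-in for the raw LLM response shown above:
# a JSON array with one coding object per comment id.
raw = """[
  {"id": "ytc_UgxwGSLLMeIAI682jhZ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzolWtZTtMrmO4hMLZ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]"""

def coding_for(comment_id, payload):
    """Return the coding dict for one comment id, or None if it is absent."""
    return next((rec for rec in json.loads(payload) if rec["id"] == comment_id), None)

# Look up the coding for the comment inspected above.
rec = coding_for("ytc_UgzolWtZTtMrmO4hMLZ4AaABAg", raw)
```

Indexing by `id` like this is what lets a batch response be matched back to individual comments when the model returns codings for several comments at once.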