Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Eric Schmidt's three cases where we should switch it off are so silly that I can't believe anyone takes him seriously. Going in reverse order, these tools are already connected weapons. Go ask Palantir about that. Next, recursive self improvement is already here also. The main key to it was the understanding to *not* feed the LLM a diet of LLM material (which causes the model to collapse) but to further restrict the training information to known good sources. The only way that humans are in this loop is the certification of "known good". Seems possible that this requirement wont be superseded. Lastly, the silly idea of "if they start talking to each other in a language we can't understand" misses the absolute fact that they're currently not reasoning in a way that we can understand. We literally can't understand how it "talks to itself" but he's "worried" about the invention of a new language? I think looking at this from the "danger" perspective is all so silly if you actually look at it from a real capabilities perspective. Once you know how LLMs actually work, you realize there is a very hard cap on their capabilities that falls way below "intelligence" or "reasoning" but far above the ability to fool people through the appearance of complexity. It's pure Expert Fallacy all the way down.
youtube 2026-02-27T16:2…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          liability
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugy745KW0bwYwXPOkeh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgzkSGCDmiAk8-rJqm14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzD5yoHMr7EsRS3wTN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz3feiKK9KBfS9LBBF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxXoJBy1OZU0JvOdKR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwC8dxH78M2d20gxmh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzFS68sh-r2X22kRpN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxmZujFJdcwUuAaTmp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwMFqmFBbpQUhoOPJZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgxoIJkd3D4g6APxhoh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"} ]