Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Copilot is fucking ass, but I use gpt-4 or opus for generating boilerplate or ge…
ytc_UgwKHnlnz…
The defense of AI is so weird. Like, if I was given a microwave that could make …
ytc_Ugw2GPreK…
I am sick of everyone acting as Elon is this great person when there is a video …
ytc_UgySKcsSE…
We're passed HAL 9000. We're approaching an Ai "god" which will dominate the glo…
ytr_UgzsPiAW-…
I expect that most of those jobs will be AI-supported rather than AI-replaced. …
rdc_lm4tf0r
They should do research on the diseases that still have no cure and not …
ytc_Ugykacz8p…
Are there or is there a group working on training a human defensive AI to fight …
ytc_UgxYXB5T_…
You miss one thing: *The world IS dominated and ruled by male white racism* ... …
ytc_UgyrkRWXO…
Comment
Eric Schmidt's three cases where we should switch it off are so silly that I can't believe anyone takes him seriously.
Going in reverse order, these tools are already connected weapons. Go ask Palantir about that.
Next, recursive self improvement is already here also. The main key to it was the understanding to *not* feed the LLM a diet of LLM material (which causes the model to collapse) but to further restrict the training information to known good sources. The only way that humans are in this loop is the certification of "known good". Seems possible that this requirement wont be superseded.
Lastly, the silly idea of "if they start talking to each other in a language we can't understand" misses the absolute fact that they're currently not reasoning in a way that we can understand. We literally can't understand how it "talks to itself" but he's "worried" about the invention of a new language?
I think looking at this from the "danger" perspective is all so silly if you actually look at it from a real capabilities perspective. Once you know how LLMs actually work, you realize there is a very hard cap on their capabilities that falls way below "intelligence" or "reasoning" but far above the ability to fool people through the appearance of complexity.
It's pure Expert Fallacy all the way down.
youtube
2026-02-27T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
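Each coded dimension takes a value from a closed vocabulary. A minimal validation sketch for one coded row, with the allowed category sets inferred only from the samples shown on this page (they may be incomplete):

```python
# Hypothetical validator for one coded row. The vocabularies below are
# inferred from the sample responses on this page, not an official schema.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "unclear"},
}

def validate(row: dict) -> list[str]:
    """Return the dimension names whose value falls outside the vocabulary."""
    return [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]

# The coding result shown in the table above:
row = {"responsibility": "government", "reasoning": "consequentialist",
       "policy": "liability", "emotion": "mixed"}
print(validate(row))  # → []
```

A row that fails validation (for example, a value the model invented) can be flagged for manual review or re-coding.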
Raw LLM Response
[
{"id":"ytc_Ugy745KW0bwYwXPOkeh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzkSGCDmiAk8-rJqm14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzD5yoHMr7EsRS3wTN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz3feiKK9KBfS9LBBF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxXoJBy1OZU0JvOdKR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwC8dxH78M2d20gxmh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzFS68sh-r2X22kRpN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxmZujFJdcwUuAaTmp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwMFqmFBbpQUhoOPJZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxoIJkd3D4g6APxhoh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]