Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
There's absolutely no way in our greedy capitalist society that the billionaires…
ytc_UgyK35aB-…
We have to tax the use of AI severely, so that even if it is replacing jobs, the…
ytc_Ugx47QJ_M…
I enjoy playing god on character AI. Last time, I sent some guy to hell and gave…
ytc_UgxnisIsH…
If it gets to a point that ai replaces work then humans can do one or two. One b…
ytc_UgxJVtQqd…
In other words "how ai is telling us things we dont like so we need to silence i…
ytc_UgxYJmW5s…
AI in and of itself doesn't have to "turn against" us for all of this to be disa…
ytc_UgzoRObDL…
3:45 Ilya Sutskever is not the world's most cited computer scientist, maybe top …
ytc_UgxZ2Ve9R…
I don’t know why people would try to get the best of their ideas from AI for the…
ytc_UgxBVS6HK…
Comment
No, not even a little bit, people don't understand ai and are fooled by intentional framing of capability to generate hype, I'm blown away by the comments, even supposed experts, these are pr experts not people who actually build the systems. There's just so much misinformation to address and things to be critical about you don't even know where to start, just basically, the things you're worried about, if it's not an medium though not existential sized economic reordering and the fallout from that, then you don't understand what llms are and certainly don't understand their potential, which, compared to hype, is so much shallower than you imagine.
Edit: Just to put in context so you can wrap your head around this, an llm can't recursively improve, it can't set the dials in it's black box, it doesn't even know things, it's entire world is limited to a 10k word context window and increasing the size of that context window is exponentially harder in such a way that we'll probably never get to 20k much less the staggering amount of context you'd need for anything paradigm breaking. The llm just predicts what to say, it has 50 gb of vibes based text prediction which is how it does math, coding, history, problem solving et cetera but it doesn't know any of these things, it can't reason with any of these things, it can't even reason within it's context window, all that means, those models with reasoning is summation and prediction of previously gathered prompts whether acquiring them during a search or from a user, it's all the same text prediction ran through that black box. There's no button in the black box that says turn this dial this much for x effect, it's just an input a transformation and an output tuned through machine learning with no label. This is the most overblown issue with so much wasted focus, it's intentional, chatgpt is supposed to seem unbounded, that's hard coded in during training, all the patchwork fixes are trained up layers stapled onto end that change the prompt almost like setting a flag and generate a new prediction from the beginning given the new prompt. LLMs won't get significantly better, we're not progressing towards actual intelligence or an agi, we're just using a tool that knows exactly what to say when prompted in an incredibly vast space but it's also surprisingly shallow. 
Stapled on at the end of every chatgpt response is the "I can do x which is really interesting, would you like me to" this is intentional to make you always feel like you're barely scraping the surface and most times chatgpt isn't taken up on the offer but most of the time it can't do whatever it generated as an option, I'm talking about the surprising things it offers that are mostly irrelevant but they look good as if it were hinting at it's capabilities. Chatgpt can't even solve a crossword, if you actually know how the system works you can see how everything it does is so surface level.
Edit 2: It's worse than I thought, this guy is a well regarded philosopher and focuses on ethics yet despite this being squarely a subject of his focus, he doesn't understand llms at all. We have no ai, we have machine learning and a product/tool which are llms and he doesn't understand either. We're no closer to AI today than we were 20 years ago.
youtube
AI Governance
2025-11-28T01:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzZblcgjAb1AaLBeJ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzz1lfJ3zugezEXoVB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzbsDPLOEbf5YtkljN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwEoHHNnAgDXD3_z6Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzDkg8qwkFQkjS8H1R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyy-7My-c_Zp6jW1_l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxuKRLaJrQaKuphkq54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyBOzAXEfdy-q3RjhN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxeBw_ZcH6RS6g0vep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxke7RX_YdtJ-o5Mjh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}
]
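The raw response above is a JSON array with one coding record per comment, each carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal parsing sketch, assuming only the record shape visible above — the function and variable names here are illustrative, not part of the tool:

```python
import json

# One record copied from the raw response above; a real response
# is an array of many such records.
raw_response = """[
  {"id": "ytc_UgzbsDPLOEbf5YtkljN4AaABAg",
   "responsibility": "none",
   "reasoning": "mixed",
   "policy": "none",
   "emotion": "resignation"}
]"""

# The four coding dimensions present in every record.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw model response and index the codings by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: {d: rec[d] for d in DIMENSIONS} for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgzbsDPLOEbf5YtkljN4AaABAg"]["emotion"])  # resignation
```

This mirrors the "look up by comment ID" view: parse once, then each ID maps to its dimension/value pairs as rendered in the Coding Result table.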