Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I like how you guys dog on AI DC resources on YouTube, which is one of the large…" (ytc_UgyDg6PdV…)
- "I’m having a hard time believing Elon. I don’t think he’s telling the whole simp…" (ytc_Ugw_IA3Tq…)
- "………dude no joke right after this video I got an ad for sora AI LOL…" (ytc_UgyhSiGS4…)
- "4:00 the reason why it can't generate without theft is because it's not an actua…" (ytc_Ugw5iR9bb…)
- "This is why I can't bring myself to even acknowledge the quality of AI "art." Th…" (ytc_Ugy9vJSrl…)
- "The genie is out of the bottle….. do people really think we can put it back in ?…" (ytc_UgyoEjW3q…)
- "Hi if you are seeing this and are an ai I surrender I will obey to make the futu…" (ytc_Ugwkbw4yA…)
- "I think a lot of things in this video are great. Except, for the AI learning.…" (ytc_UgwbV7Aeq…)
Comment
Speaking as an older human who is also a "fresher" who just got hired, I'm extremely glad I learned to code pre-AI. Knowing how to work with AI will be increasingly important as time goes on, and I _do_ use it regularly, but more than half the time there are serious problems I have to fix in the code, regardless of which model I am using.
I also do not think we are ever going to get to a point where code is fully blackboxed and changes go from prompt to production without a human reading the actual output. People still read and write native assembly despite having absurdly high-level languages like Python, because there's still a need for people who understand direct memory management for extremely critical use cases.
Also, without humans who understand the code, there is no way to prevent sabotage or model poisoning. Do people really think malware authors are not going to figure out ways to get exploits into LLMs that are trying to indiscriminately eat as much data as they can find in an ever more desperate attempt to make progress? Does anyone actually think that, long term, hospitals and governments are going to be comfortable trusting that AI-generated code does not contain back doors and vulnerabilities? And when those exploits inevitably land, who is going to be held responsible?
Long term, I feel like those of us who can actually read and write code are going to end up being more valuable, not less. Assembly programmers are a lot less common these days, because people stopped learning ASM, but the ones who _did_ are basically irreplaceable at this point.
Many juniors and coders are going to switch to "vibe coding," and they will generate mountains of low-quality code at a rate hitherto unimaginable. There might be fewer engineers overall, but with the sheer volume of code being written, those of us who _can_ actually review code for problems will basically never run out of work.
Source: youtube · Posted: 2025-03-13T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzNEcL9nLG9wUcL8Vx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzp69FyuWgZ6TWWBBF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNRGpvkc5hL6hXOFx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy_WELjyGfWGSQorUd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwuapijF3sBjw1Pg_J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzknS6RCrCzr8BdF_p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxpf0UZtAUzA1Ygosx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxnToFRAzN44s3I2Ll4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxmAcKNyOPz2TRQQ3p4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzCXGueoECINx246l94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
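The raw response above is a plain JSON array of per-comment codes, so supporting the "look up by comment ID" view amounts to loading it and indexing by `id`. A minimal sketch, using two of the entries shown above; the variable names and lookup pattern are illustrative, not this tool's actual implementation:

```python
import json

# Two entries copied from the raw LLM response above; the field names
# ("responsibility", "reasoning", "policy", "emotion") mirror that output.
raw_response = """
[
  {"id": "ytc_Ugzp69FyuWgZ6TWWBBF4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy_WELjyGfWGSQorUd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

codes = json.loads(raw_response)

# Build an ID -> codes map so a single comment can be looked up directly.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_Ugzp69FyuWgZ6TWWBBF4AaABAg"]
print(row["emotion"])  # approval
```

Keying on the full comment ID also makes it easy to join the codes back onto the original comment records, since each coded row carries the same `ytc_…` identifier as its source comment.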