Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Speaking as an older human who is also a "fresher" who just got hired, I'm extremely glad I learned to code pre-AI. Knowing how to work with AI will be increasingly important as time goes on, and I _do_ use it regularly, but more than half the time there are serious problems I have to fix in the code, regardless of which model I am using.

I also do not think we are ever going to get to a point where code is fully blackboxed and changes go from prompt to production without a human reading the actual output. People still read and write native assembly despite having absurdly high-level languages like Python, because there is still a need for people who understand direct memory management for extremely critical use cases.

Also, without humans who understand the code, there is no way to prevent sabotage or model poisoning. Do people really think malware authors are not going to figure out ways to get exploits into LLMs that are trying to indiscriminately eat as much data as they can find in an ever more desperate attempt to make progress? Does anyone actually think that, long term, hospitals and governments are going to be comfortable trusting that AI-generated code did not introduce back doors and vulnerabilities? And when it inevitably does, who is going to be held responsible?

Long term, I feel like those of us who can actually read and write code are going to end up being more valuable, not less. Assembly programmers are a lot less common these days, because people stopped learning ASM, but the ones who _did_ are basically irreplaceable at this point. Many juniors and coders are going to switch to "vibe coding" and will generate mountains of low-quality code at a rate hitherto unimaginable. There might be fewer engineers overall, but with the sheer volume of code being written, those of us who _can_ actually review code for problems will basically never run out of work.
youtube 2025-03-13T19:5…
Coding Result
Responsibility: none
Reasoning: mixed
Policy: none
Emotion: approval
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzNEcL9nLG9wUcL8Vx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzp69FyuWgZ6TWWBBF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxNRGpvkc5hL6hXOFx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy_WELjyGfWGSQorUd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwuapijF3sBjw1Pg_J4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzknS6RCrCzr8BdF_p4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxpf0UZtAUzA1Ygosx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxnToFRAzN44s3I2Ll4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxmAcKNyOPz2TRQQ3p4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzCXGueoECINx246l94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
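A minimal sketch of how the raw model output above could be inspected and cross-checked against the displayed coding. This assumes the entry with id `ytc_Ugzp69FyuWgZ6TWWBBF4AaABAg` is the one for this comment (its values match the Coding Result shown); the raw array is truncated to two entries here for brevity.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (truncated to the first two entries for illustration).
raw = '''[
  {"id":"ytc_UgzNEcL9nLG9wUcL8Vx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzp69FyuWgZ6TWWBBF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]'''

# Index the codings by comment id for lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Dimensions shown in the Coding Result above for this comment.
expected = {"responsibility": "none", "reasoning": "mixed",
            "policy": "none", "emotion": "approval"}

# Assumed id for this comment's coding (matches the displayed values).
coded = codings["ytc_Ugzp69FyuWgZ6TWWBBF4AaABAg"]

# Collect any dimension where the raw output disagrees with the display.
mismatches = {k: (coded[k], v) for k, v in expected.items() if coded[k] != v}
print(mismatches)  # empty dict when raw output and displayed coding agree
```

An empty `mismatches` dict confirms the displayed coding was taken verbatim from the model's JSON rather than post-processed.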