Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Here's something interesting I noticed: Have an AI implement a feature for an app with really bad code and it will produce equally bad code. Have it implement a feature for an app with really good code and it will produce way better code. The code quality you get from AI depends a lot on the code quality your codebase already has!
I guess that's not surprising, since AI works predictively and always bases its predictions on the input it's fed. But it also means you can't get good code out of your AI unless you already have good code. And who would provide that initial good code? Not the AI! If it starts with nothing, its code quality will vary all over the place because there's no prediction base to work from.
That means you'll always need developers to bootstrap a project with some initial good code. Design good APIs, design good interfaces, design meaningful base classes, write some code to define the coding style, make decisions about which components exist, how they interact, how data is stored, and so on. Only then can you start using AI to fill in the gaps by giving it very detailed, small-scale code tasks. Write function headers but leave the implementation to the AI. Once functions are broken down into very small tasks and named in a way that the name alone gives away exactly how the function should work, the AI will be able to handle the boring task of just writing the code.
AI coding works if you use AI like you use compilers. You don't write CPU instructions by hand. You formalize how something should work and the compiler writes the CPU instructions for you in a way that matches what you formalized. You can use AI the same way: formalize exactly what a function should do and AI can write that function for you. AI is a tool, but saying AI can replace developers is like saying compilers can replace developers. A compiler cannot produce anything meaningful unless you feed it exactly the right instructions. The same goes for AI: it cannot produce meaningful code unless you tell it exactly what it must do.
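The "write the header, delegate the body" workflow the comment describes could be sketched as follows. Everything here is a hypothetical illustration (the function name, fields, and task are invented for the example, not taken from the comment): the human author writes the signature, types, and a precise docstring, and only the marked body is what the AI would be asked to generate.

```python
# Sketch of the "formalize first, then let AI fill in" workflow.
# The signature and docstring below are the human-written specification;
# the body is the part that would be delegated to the AI.

def count_overdue_invoices(invoices: list[dict], today: str) -> int:
    """Return how many invoices are overdue.

    An invoice is overdue when its 'due_date' (ISO 'YYYY-MM-DD' string)
    is strictly earlier than `today` and its 'status' is not 'paid'.
    """
    # --- body below is what the AI would be asked to write ---
    return sum(
        1
        for inv in invoices
        if inv["due_date"] < today and inv["status"] != "paid"
    )
```

Because the name and docstring pin down the behavior exactly (string comparison of ISO dates, the 'paid' exclusion), a reviewer can check the generated body against the specification line by line, which is the checking role the comment goes on to describe.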
And then you also need developers to check the code. Half the time, I can't use AI code the way it comes out. Only after reading the code, understanding it, and making small adjustments do I get the code I actually wanted. The code the AI produced was often inefficient, had bugs, frequently lacked meaningful handling for corner cases (which I didn't explicitly specify in the prompt but any human developer would have accounted for), was poorly formatted, usually far too verbose, made incorrect assumptions, lacked meaningful comments while containing plenty of unnecessary ones, was broken down too little or too much, or looked like it worked but then failed in unit tests.
youtube
AI Jobs
2026-03-10T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzS8Wdn7SpBw_ZdXPl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwUTOCjaUAEg0C0bf14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6druwGj_n-Fd3jyp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwn1Jhni-tPCMvn-A14AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJTLw7078y9s4ALop4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxp4om7BI_IIMrucXp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyzYvpyElRbThptvot4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxTRpl4kmC01jD1QMR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx3Kv0ADbNkDsT8FFZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxR1U39RLxGPASKUtl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
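A raw response like this can be checked before its values reach the dimension table. Below is a minimal validation sketch; the allowed value sets are inferred from the examples visible in this document, not from an official codebook, so treat them as assumptions.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# in this document (an assumption, not an authoritative codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "mixed", "approval", "outrage", "fear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-vocabulary values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r}: {rec.get(dim)!r}")
    return records
```

A gate like this catches the common failure mode where the model invents a label outside the coding scheme, instead of letting it silently enter the coded dataset.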