Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "If nobody has jobs to earn money, they won't have money to give companies for go…" (ytc_UgxfrpYmr…)
- ""Gatlin was wrong, Automating weapons didn't save lives" GASPPPPP you mean to t…" (ytc_UgzR_Kobi…)
- "Claude responds: This is a fun video to react to — and honestly, as the AI being…" (ytc_UgyXz9nA2…)
- "@sim_issimmin also, saying ai art have 0 effort is wrong too. The art it self ha…" (ytr_UgyM880PL…)
- "2:22 technically not wrong, but not right either. Took a sculpting course and my…" (ytc_UgxN-I_tw…)
- "Let me point out that there is no intelligence in "AI". It just reorganizes what…" (ytc_UgzsXliZ-…)
- "You have some common sense, but it seems like you are limiting it, why? Why use …" (ytr_Ugztaiq0E…)
- "In my experience you probably wouldn't have had that accident if a human was dri…" (ytc_UgwiWUOen…)
Comment
In my experience it is smart at the start while the context is empty, and it can get hopelessly stupid as the context fills up. I have had projects become so broken by bad context that the model starts deleting code when explicitly told not to. But you can avoid getting to that point. You have to shape the project to keep files and systems small at any one time: keep files atomic, and split handlers, hooks, and utilities apart, dividing things up more than a human normally would. I'd prefer one file for the gutter, header, footer, body, and logic of a node, but the LLM needs as many separate files as possible so it can keep context small. The instant you see it do anything dumb that isn't what you asked for, ask it to do an audit and refactor. This probably varies by LLM, but I keep all files under 200 lines and aim for 80. Google Gemini Flash builder would probably do well at making this project, and it's currently free to use on Google AI Studio. Just ask it to "report and stop" if you don't want it immediately coding every time you ask a question.
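The commenter's workflow boils down to a mechanical check: no source file over 200 lines, ideally under 80. A minimal sketch of such an audit, assuming a generic project directory and a hypothetical set of source extensions (the thresholds come from the comment itself):

```python
# Sketch of the "keep files small" audit described in the comment.
# HARD_CAP and TARGET are the comment's own numbers; the extension
# list and directory layout are hypothetical.
from pathlib import Path

HARD_CAP = 200   # comment's upper bound per file
TARGET = 80      # comment's preferred file size

def audit(root: str, exts=(".js", ".ts", ".py")) -> list[tuple[str, int]]:
    """Return (path, line_count) pairs for source files over HARD_CAP lines,
    largest first, as candidates for a refactor pass."""
    offenders = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            n = sum(1 for _ in path.open(encoding="utf-8", errors="ignore"))
            if n > HARD_CAP:
                offenders.append((str(path), n))
    return sorted(offenders, key=lambda t: -t[1])
```

Running this after each model session would flag exactly the files the comment says trigger the "audit and refactor" prompt.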
youtube
AI Jobs
2026-02-28T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
  {"id": "ytc_UgyXZfmLFdFy687Wbld4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzbrz3HQr2-m0ly8R54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw7cyLeHz249ianPjV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzIdXaXfIBaqRfxRbh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzxDEzOquw0Ba2VEex4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwpq7TUw2v9PYEij3l4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzjSR-vh1QGaxo-N0p4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwc3XdU-InTZTmoQL54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz-shLGtMQ49moj2AZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgygeVEhjkPrvrEdcq54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
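The raw response is a JSON array with one object per coded comment. A minimal sketch of loading and sanity-checking it, assuming the array-of-objects shape shown above (the five field names are taken from this sample, not a full codebook):

```python
# Sketch: parse one raw LLM coding response into a lookup by comment ID,
# rejecting rows whose keys don't match the five fields seen in the sample.
import json

FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> dict[str, dict]:
    """Map comment ID -> coded dimensions; raise on malformed rows."""
    coded = {}
    for row in json.loads(raw):
        if set(row) != FIELDS:
            raise ValueError(f"unexpected keys: {sorted(row)}")
        coded[row["id"]] = {k: row[k] for k in FIELDS - {"id"}}
    return coded
```

With the response above, `parse_codes(raw)` yields ten entries keyed by `ytc_…` ID, so a coded comment's dimensions can be looked up directly when rendering the table shown earlier.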