Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugxbcfv77…: I am so sick of all of you fucks pretending total emotional detachment is the sa…
- ytc_UgzeV2KOX…: Your ai is as smart as the monkey in your family tree. You say I come from monke…
- ytc_UgwPysZf7…: Dude tries so hard not to mention the fact that those AI data centres must have …
- ytc_Ugy5mSoek…: This is the absolute truth. The companies spearheading generative ai don't care …
- ytc_UgwNcervM…: Illuminating and scary, but practical and inspiring towards the end of the speec…
- rdc_gkqfb9s: Probably going to be the poles [melting faster than expected](https://www.nation…
- ytc_UgwSmwNfs…: Well, if you let AI paint Starry Night and sell it for millions of dollars, that…
- ytc_UgxD9HQnF…: ai will scrape and train its own work on itself. it will progressivly get worse.…
Comment
I think you’re looking at agentic coding the wrong way.
LLMs are basically statistical machines: they give you the average answer. If you just let them run, you don’t get great software, you get very confident mediocrity.
What I see senior engineers doing is not handing over architecture or the coding loop. They stay in control and use AI as an augmentation, not a replacement.
The workflow is honestly the same one we’ve always used:
plan → execute → test → commit → repeat
You still need to understand what you’re building and why. The AI just helps you move faster inside that loop.
If you skip the planning (and I don't mean a generic plan like "use Kotlin on the backend") and testing parts,
you’re not doing agentic coding, you’re just vibe coding with extra steps.
Used this way, AI keeps the quality bar up instead of lowering it.
youtube · AI Jobs · 2026-01-20T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz4agrWFYi3pnO77vZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzcJBGvtMXN48bdZSh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz-gSe5oYsB0fe3Nf54AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzb1Q7ao1ocSruRgkt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxM4r7YuSrV2DSRzrh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw8pfKO9_5HjYtf6td4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwxDfXybXSTcXq1uJ14AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwFFzuza5KTaAiGxwJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzZFczRqilIEzDsc94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxcxCCni-sSFF_v27N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
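The raw response above is a JSON array with one record per comment ID. A minimal sketch of how a downstream script might parse and sanity-check such a response before ingesting it into the coding table; the allowed value sets here are inferred only from the sample records on this page, not from the actual codebook, so treat them as an assumption:

```python
import json

# Allowed values per dimension, inferred from this sample alone
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"user", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(records):
    """Return the ids of records whose coded values all fall within ALLOWED."""
    ok = []
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            ok.append(rec["id"])
    return ok

# Two records copied from the raw response above.
raw = """[
 {"id":"ytc_Ugz4agrWFYi3pnO77vZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugz-gSe5oYsB0fe3Nf54AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"}
]"""

records = json.loads(raw)
valid_ids = validate(records)
```

Records that fail validation (e.g. a value outside the inferred sets) would simply be excluded from `valid_ids` and could be flagged for re-coding.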