Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- “Current AI systems [do not “think” or “reason”](https://arxiv.org/abs/2508.01191…” (rdc_n7xzl8f)
- “Corporate leaders do not give a fuck for the normal people. The AI bubble is jus…” (ytc_UgxhncAzP…)
- “What are the odds that Google would expand the available drinking water to Uragu…” (ytc_Ugw2NJ2GM…)
- “@dontnukeluke go to tiny ūṟḻ after the slash do AI Personal Emission Research wi…” (ytr_UgwCOxCNM…)
- “but it's true do I learn to draw or learn how to use IA i'm new at drawing and I…” (ytc_UgzjcVAFF…)
- “At my soon-to-be old DSP they treat us great. But every now and again during sta…” (ytc_Ugzef7CG3…)
- “Actually - what do you mean on that front? I'm pretty sure I am going to cancel …” (rdc_o7xbz3v)
- “It seems like your comment got cut off! If you're referencing the dialogue about…” (ytr_UgxgA82IQ…)
Comment
In my daily practice, AI is good at analysing existing code. It can answer questions about the how and even the why. It's good at reverse engineering. It's decent at mentoring juniors who want to get into the code.
With good prompts, and pointed questions, it can give you insights in seconds, it can re-do and improve existing code in a blink, but you have to have a plan and execute it piecemeal fashion or else the AI gets lost in the weeds.
Ask it to create a new feature from some natural language "specification" and it's going to mostly fail. It's introducing bugs, it's using a jumble of old and new features, it's adding useless and verbose code, it's missing details that were in the spec! It's the wrong way to use AI. If you have a template and your code is some kind of rinse and repeat with small variations then it will shine. As soon as variations are needed, it will go nuts and you'll waste your time.
In short, AI is absolutely not the silver bullet big tech has advertised.
Fools are those who fell for it.
youtube · AI Jobs · 2026-03-24T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugw8rNkzQnwt2EGMn8x4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwGcHMgXYu0uEUy6594AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwK-F3qOJH12YtISQZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxWESVO78YkwfJPprt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzBKeH_VqX41psyn8x4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugwm4Kw5maprCP_Sea54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxsA0vhmt0MkKQZ1_54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw3iXihBLBSQklp0DF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy0mWjN91fagaY27iV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugx7qPlLL58OeT6GNul4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
```
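The raw response is a JSON array with one object per coded comment, keyed by `id`. A minimal Python sketch of how such a batch can be parsed and indexed for lookup by comment ID (the two sample records are copied from the array above; the variable names are illustrative, not part of any tool):

```python
import json

# Raw LLM response: a JSON array of coded comments, with the same five
# fields used above (id, responsibility, reasoning, policy, emotion).
raw = '''[
  {"id": "ytc_UgxsA0vhmt0MkKQZ1_54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy0mWjN91fagaY27iV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]'''

# Index the batch by comment ID for O(1) lookup of a coding result.
coded = {row["id"]: row for row in json.loads(raw)}

record = coded["ytc_UgxsA0vhmt0MkKQZ1_54AaABAg"]
print(record["emotion"])  # approval
```

This is the lookup the "by comment ID" view performs: parse the batch once, then fetch any comment's coded dimensions from the index.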