Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
1. Tesla's can't drive anywhere on their own.
2. LIDAR can be used outside of a…
ytr_UgzDKklm5…
U can't cheat God the Creator of all 😅😅.... AI will always fail man but natural …
ytc_UgwiZO8g5…
Pls anyone human ai can explain this video in few simple words I don't have time…
ytc_UgwiTv3cu…
This guy is dangerous because he has a drive to achieving AGI, which in itself p…
ytc_UgyA-QooX…
is there a place that does not in some way take advantage of what is close to or…
rdc_d3rgznu
I think this really might be the most exciting thing about AI for me. In researc…
ytr_UgzkQRQ68…
Honestly, AI brings more coding jobs. I really think of building an empire of fi…
ytc_UgwZGocje…
You had me until net neutrality. What exactly came of that? Did they stop you fr…
rdc_jkgt4yy
Comment
So you spent years learning to code properly — understanding architecture patterns, knowing when to use Spring Actuator, recognizing idiomatic solutions — but you gave AI coding one weekend with generic prompts and concluded it's "worse than you thought"?
You didn't use a spec. You didn't define architecture upfront. You vibed your way through with "a very generic prompt explaining in short the main features" and then acted surprised when the output needed steering.
The irony is you proved AI coding works: you built a functional app in 20 hours that you admit you couldn't have shipped otherwise. Your actual complaint is that AI doesn't automatically possess your years of accumulated taste and judgment. Correct — that's why you're still in the loop.
There are actual methodologies for this (BMAD, Speckit, detailed PRDs before prompting), but you skipped all of that and treated it like a slot machine. Imagine reviewing "learning to code" by opening VS Code with zero prep and saying "I typed some stuff and it didn't work, coding is worse than I thought."
The title should be "I tried vibe coding without preparation and had to do some work" — which is a lot less clickable, I guess.
youtube
AI Jobs
2026-01-28T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxFRZoTv9S4WMNC0qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwf3YI1gQ5M9-RrMR94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxU7Lmh8a6Cr51-67R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2OqO_EBC_Wv-1dgl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxUgOZnQgVMm3zJog54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFew5jk3OQBdiwo1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwQH6jFycSJ5B_y3l14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzz43cUpyI-1vpYWgF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1mPQInigOxtKyetN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzwZUuRhFDCyVXid5x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
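As a minimal sketch of how a raw response like the one above could be consumed, the snippet below parses the JSON array and indexes the coded records by comment ID, so a single record can be looked up the way the "Look up by comment ID" view does. The `index_by_id` helper and the `REQUIRED_KEYS` set are illustrative names, not part of any real tool; the record shape is taken directly from the sample output, but the full set of valid field values is not shown here, so only key presence is checked.

```python
import json

# Raw LLM response in the format shown above (truncated to two records).
raw = '''
[
 {"id":"ytc_UgxFRZoTv9S4WMNC0qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugwf3YI1gQ5M9-RrMR94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
'''

# Keys observed in every record of the sample response (assumed schema).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID."""
    records = json.loads(raw_json)
    index = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
        index[rec["id"]] = rec
    return index

coded = index_by_id(raw)
rec = coded["ytc_Ugwf3YI1gQ5M9-RrMR94AaABAg"]
print(rec["responsibility"], rec["emotion"])  # prints "user outrage"
```

The printed record matches the "Coding Result" table above (responsibility `user`, reasoning `deontological`, policy `none`, emotion `outrage`).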