Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So you spent years learning to code properly — understanding architecture patterns, knowing when to use Spring Actuator, recognizing idiomatic solutions — but you gave AI coding one weekend with generic prompts and concluded it's "worse than you thought"? You didn't use a spec. You didn't define architecture upfront. You vibed your way through with "a very generic prompt explaining in short the main features" and then acted surprised when the output needed steering. The irony is you proved AI coding works: you built a functional app in 20 hours that you admit you couldn't have shipped otherwise. Your actual complaint is that AI doesn't automatically possess your years of accumulated taste and judgment. Correct — that's why you're still in the loop. There are actual methodologies for this (BMAD, Speckit, detailed PRDs before prompting), but you skipped all of that and treated it like a slot machine. Imagine reviewing "learning to code" by opening VS Code with zero prep and saying "I typed some stuff and it didn't work, coding is worse than I thought." The title should be "I tried vibe coding without preparation and had to do some work" — which is a lot less clickable, I guess.
youtube · AI Jobs · 2026-01-28T19:1…
Coding Result
Dimension      | Value
Responsibility | user
Reasoning      | deontological
Policy         | none
Emotion        | outrage
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgxFRZoTv9S4WMNC0qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwf3YI1gQ5M9-RrMR94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxU7Lmh8a6Cr51-67R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz2OqO_EBC_Wv-1dgl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxUgOZnQgVMm3zJog54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyFew5jk3OQBdiwo1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwQH6jFycSJ5B_y3l14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugzz43cUpyI-1vpYWgF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy1mPQInigOxtKyetN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgzwZUuRhFDCyVXid5x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]