Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
6:55, question its decisions to understand why it did what it did. One more step is missing: verify every explanation it gives, because it will confidently make up untrue reasons for its decisions. Just like he said, treat it like a junior developer and doubt what it does.

Which leads to a contradiction that demos like this seem not to talk about: why start the workflow by prompting the AI with requirements or acceptance criteria and letting it come up with a design for you? Anything worth planning or designing would need a senior dev to do it before handing it to the junior to carry out.

One prime example of why I can't trust AI reasoning goes like this:

Dev: Why did you do it that way?
AI: Great question! I did it this way because abc.
Dev: What about this other way?
AI: You are absolutely right! I should have done it this other way...
Dev: Well, what made you choose the original way? Which way is better?
AI: I chose the original way because of abc, but since you've mentioned this other way, this other way is better because of xyz.
Dev: But this other way doesn't work in this specific scenario.
AI: You are absolutely right!

This is why I find I'm most effective with vibe coding when I come up with the plan myself before giving the AI detailed steps/tasks, so that it only handles grunt work and not actual thinking. I can't trust its thought process because I still have no idea whether it's just using pattern recognition from existing bad code in the codebase or taking code examples out of context from online, and I often find these loopholes in logic too late, having wasted a bunch of time going down the wrong path by then.

Any "agentic workflow" that involves multiple rounds of agents thinking, planning, and iterating is just too much of a black box with not enough involvement by a dev. Does anyone have any success using agentic workflows in the real world to produce maintainable code?
youtube AI Jobs 2025-11-23T21:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugy2tXcQKFgOHnizOyx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgyKv9d5snIAz0SYPkt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyrEZaDPmwS5SsgCQJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugw6uX2r8MtjGpcy9nR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwNr8tvkHlpVZXn03F4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugyo3_IvbgfWwc7lhXJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
 {"id":"ytc_Ugz2Cbf-Dy-jlVucwix4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_Ugyf6y-V1GMuNwHIoTh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugwald57FcPm3Wo2ln54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxYCF2Ed_bICzeOtLJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}]
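The "Coding Result" dimensions above appear to be the single record in this JSON array whose `id` matches the displayed comment. A minimal sketch of that lookup, assuming the response parses as a JSON list of records; the function name and inline sample are illustrative, not the pipeline's actual code:

```python
import json

# Trimmed sample of a raw LLM response in the format shown above
# (one record kept for brevity; the full response has ten).
raw = (
    '[{"id":"ytc_Ugw6uX2r8MtjGpcy9nR4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"deontological",'
    '"policy":"unclear","emotion":"fear"}]'
)

def code_for(comment_id, raw_response):
    """Return the coding dimensions for one comment id, or None if absent."""
    for record in json.loads(raw_response):
        if record["id"] == comment_id:
            # Everything except the id becomes a Dimension/Value row.
            return {k: v for k, v in record.items() if k != "id"}
    return None

result = code_for("ytc_Ugw6uX2r8MtjGpcy9nR4AaABAg", raw)
# result == {"responsibility": "ai_itself", "reasoning": "deontological",
#            "policy": "unclear", "emotion": "fear"}
```

A real pipeline would also need to handle responses that are not valid JSON or that omit a requested id, which is why inspecting the exact model output matters.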