Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’ve found the best AI’s (sonnet 4.5 etc) is actually useful and doesn’t just churn out stuff that isn’t thought through and will cause many issues later. However, this is only because I give it a very strict set of requests that I only know from being a dev for 25+ years. I make sure it only handles a small function at a time and that function is testable. If you give it too much freedom then it makes mistakes. The problem is that the mistakes don’t look like mistakes as the syntax is correct and it mostly runs; the mistakes are things that it doesn’t know even from a massive context; eg: this with system will fail if x,y or z happens. But as it passes tests currently, the AI confidently reports “finished”. I still think of it as a very eager young geek - knows it stuff but is miles too confident and makes mistakes as it is rushing just like an overly confident junior. Some day this may change but I don’t think the current LLM approach will solve this fully unless it is very tuned to “think” properly.
YouTube · AI Jobs · 2025-12-29T10:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyShVCiFohFbAuxHbF4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyGKKhCNWKpWmE0QwJ4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgxWlY8wKwlC3IVD67p4AaABAg", "responsibility": "user",      "reasoning": "unclear",          "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgzjN5oB7JY8AksUHH94AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugy5UMPycm4SjvWi_mZ4AaABAg", "responsibility": "company",   "reasoning": "contractualist",   "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugw5HiA2Fy0j6lNiMAd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwGUE3QM4188oI13jB4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugz417YVrvH0_gMyDE94AaABAg", "responsibility": "company",   "reasoning": "mixed",            "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugz3dMe7Z1hyMM1JIEx4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxpF70ghTL3Lsr47EV4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"}
]
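A raw batch response like the one above can be parsed and sanity-checked before the per-comment codes are trusted. The sketch below is a minimal, hypothetical validator: the allowed category values are assumed only from the labels visible on this page (the real codebook may define more), and `parse_raw_response` is an illustrative helper name, not part of any actual pipeline.

```python
import json

# Allowed values per coding dimension -- assumed from the values seen
# in this page's raw response; the real codebook may differ.
SCHEMA = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist",
                  "contractualist", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "fear"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response, keeping only schema-conformant entries."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        # Every comment id in this batch carries the ytc_ prefix.
        if not entry.get("id", "").startswith("ytc_"):
            continue
        # Each dimension must be present and hold an allowed value.
        if all(entry.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(entry)
    return valid

raw = ('[{"id":"ytc_UgyShVCiFohFbAuxHbF4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
print(len(parse_raw_response(raw)))
```

Dropping malformed entries rather than raising keeps one bad code from discarding the rest of a ten-comment batch; stricter pipelines might instead flag such entries for re-coding.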