Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For me personally I see this as a paradox, if you know what you are doing AI has little value, sure it can write a bunch of boiler plate or a bunch of test cases but it never does it the way I want. By the time I have got the prompt written to do what I want it, and accounting for the time it takes it to run every iteration I could have just written the changes myself. If you know what you are doing you are faster than the AI and if you don't know what you are doing AI is dangerous and likely not going to work out, sure you can do the thing where you are juggling a bunch of agents and getting them to do multiple tasks at the same time so you are technically doing a bunch of stuff in parallel, but that actively lowers the quality of the output you get as you are constantly context switching. I find AI is only good when you know what you are doing, but its also only beneficial when you don't know what you are doing. I call it the AI use case paradox
youtube AI Jobs 2026-04-15T14:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_UgxOa1Cw5bSJA8Mk0fZ4AaABAg.AVhR0bIzy9IAVkhePgWMua", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyNu06lincyJs_xFEp4AaABAg.AV00t86QC27AV04r8gFO-y", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytr_UgwX7qjzo1jevbrvHc14AaABAg.AUcRT4udkolAUe6MuUYlbD", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytr_UgzS8Wdn7SpBw_ZdXPl4AaABAg.AUACWNeZRX8AUB2P00ozDQ", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzS8Wdn7SpBw_ZdXPl4AaABAg.AUACWNeZRX8AUB7uirU0u6", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgwJTLw7078y9s4ALop4AaABAg.AU9yBfFlv0EAUT3WkhxmHa", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytr_UgwJTLw7078y9s4ALop4AaABAg.AU9yBfFlv0EAVcQfwIUbYw", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgyaqoduY_PgcST1Sx54AaABAg.ATIrfjkca_HAUBdUU-1Bwt", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxCrxQyrVCkrgsnEMx4AaABAg.ATGg6s1reHNAU365slz6gh", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_UgwKIURSjBsqRHCjem14AaABAg.ATFgsE7q4eGATFhC7fzwdh", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
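A response like the one above is only usable downstream if every record carries an `id` and a valid value for each coding dimension. The following is a minimal validation sketch; the allowed value sets are inferred solely from the records shown here (the actual codebook may define additional labels), and the two embedded records are excerpts from the raw response above.

```python
import json

# Allowed values per coding dimension. These sets are assumed from the
# values observed in this response; the real codebook may permit more.
SCHEMA = {
    "responsibility": {"ai_itself", "user", "company", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"resignation", "indifference", "approval", "outrage", "fear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record against SCHEMA."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Two records excerpted from the raw response above.
raw = '''[
 {"id":"ytr_UgwJTLw7078y9s4ALop4AaABAg.AU9yBfFlv0EAVcQfwIUbYw",
  "responsibility":"ai_itself","reasoning":"consequentialist",
  "policy":"none","emotion":"resignation"},
 {"id":"ytr_UgwKIURSjBsqRHCjem14AaABAg.ATFgsE7q4eGATFhC7fzwdh",
  "responsibility":"company","reasoning":"consequentialist",
  "policy":"regulate","emotion":"fear"}
]'''

records = parse_coding_response(raw)
print(len(records))           # 2
print(records[0]["emotion"])  # resignation
```

Validating before ingest catches malformed or off-schema model output (a common failure mode with structured LLM responses) instead of letting a bad label propagate into the coded dataset.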