Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The first time I tried to do something not totally trivial with AI coding, it produced code that sort of worked; that is, it passed all the simple tests. It was a regex applied to text input, outputting matching lines. It worked up to a point, but when I began feeding it random data as a regex it crashed without grace. I prompted it to fix that, and it did. Then I fed it random data as the regex and just looked at the return value until I had about 100 randomly generated valid regexes. I created about 100 random 1 GB files and had each of them parsed using each of the random valid regexes. I got a crash. I loaded it in the debugger and it was crashing way down in some library it had pulled in with God only knows what provenance. I couldn't find anything about it on Google except the link to the repo, and that was the only thing in the repo. No sane programmer would have used that repo! But when I started trying to find out how the code got there, I went through several layers of sketchy repos of code before I got to the code the AI wrote for me. It's like discovering that your car stalls when the left turn signal is on but not lit and the right door is opened, then finding out that the path goes through the radio, and through an undocumented module mounted under the spare tire that doesn't even show up on the dealer's parts diagrams or manuals, before hitting the engine computer on a pin labeled "no connection." When you disconnect the wire, it prevents the stall, but it causes the reverse light to be on all the time! Here's your spaghetti code, now give me back my bowl!
youtube AI Jobs 2026-02-05T23:1…
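The stress test the commenter describes is essentially a small fuzzing harness: generate random but valid regexes, generate random input files, then run every pattern over every file and watch for anything that crashes instead of failing gracefully. Below is a minimal sketch of that idea in Python; the sizes are scaled far down from the 100 x 1 GB described, and every name in it is illustrative rather than taken from the original code.

```python
# A minimal sketch of the stress test described in the comment above:
# random valid regexes, random text files, every pattern run over every file.
# Counts and file sizes are deliberately tiny for illustration.
import random
import re
import string
import tempfile
from pathlib import Path

def random_regex(rng: random.Random, max_len: int = 8) -> str:
    """Build a pattern from simple tokens and keep it only if it compiles."""
    tokens = list(string.ascii_lowercase) + [r"\d", r"\w", ".", "a*", "b+", "c?"]
    while True:
        pattern = "".join(rng.choice(tokens) for _ in range(rng.randint(1, max_len)))
        try:
            re.compile(pattern)
            return pattern
        except re.error:
            continue  # rare with these tokens, but retry just in case

def random_text_file(rng: random.Random, directory: Path, lines: int = 1000) -> Path:
    """Write a small file of random printable lines."""
    path = directory / f"fuzz_{rng.randrange(10**9)}.txt"
    alphabet = string.printable.strip()
    with path.open("w") as handle:
        for _ in range(lines):
            handle.write("".join(rng.choice(alphabet) for _ in range(80)) + "\n")
    return path

def grep_like(pattern: str, path: Path) -> list[str]:
    """The behaviour under test: return the lines of `path` matching `pattern`."""
    compiled = re.compile(pattern)
    with path.open() as handle:
        return [line for line in handle if compiled.search(line)]

if __name__ == "__main__":
    rng = random.Random(0)
    with tempfile.TemporaryDirectory() as tmp:
        files = [random_text_file(rng, Path(tmp)) for _ in range(10)]
        patterns = [random_regex(rng) for _ in range(10)]
        failures = []
        for pattern in patterns:
            for path in files:
                try:
                    grep_like(pattern, path)
                except Exception as exc:  # a crash "without grace"
                    failures.append((pattern, path.name, repr(exc)))
        print(f"{len(failures)} crashing pattern/file combinations")
```

A pattern-and-file pair that raises here is the kind of case the commenter hit, except that their crash surfaced deep inside a transitively pulled-in library rather than in the code the AI wrote directly.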
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugy5lIerTiqFEn-kqK94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzADIYHFjgImEe-odJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugyu1bYgWL8fI1Suk6p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz0rI5MRrU7NnD7GPx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwgpFVuwmv0C9bIubx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwA23ywMkcMGq7P-5J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyDTE6yIrM4_5pLnrV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxqfZ-a8rhWZEXhzzh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugwl94HGtNpdwC9lNi54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwltKfvJ2WL7ru4YMN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"} ]