Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The important thing to understand is that AI doesn't (yet?) have good taste for code. As a result, it learns any code it finds on the internet as if it were written by Donald Knuth himself. Every security vulnerability in example code, programming-course homework, or distributed PoC code will be trained on as an example of "industry standard code".

I think modern AIs are already good enough that if AI vendors simply spent a LOT of computing time weeding through all their training material to annotate potential issues, and re-trained the AI from scratch on the annotated material, the resulting AI would work better. It still wouldn't be perfect (because the annotation would have been made by a non-perfect AI without human supervision), but it would be much better than the AI we currently have.

For example, if you give 200 lines of code to the ChatGPT 5.4 Thinking model with the prompt "Can you find real or potential security issues in this code?", it can typically answer correctly after thinking about it for about 5 minutes. However, even AI companies are not rich enough to burn a similar amount of thinking for every 200-line code fragment in their training data!
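The annotation pass the commenter proposes can be sketched as a simple pipeline: feed each training-code fragment to a model with the quoted security prompt and store the review alongside the code. This is a hypothetical illustration, not an existing tool; `reviewer` is a stand-in for whatever (expensive) model call a vendor would actually make, and `stub_reviewer` exists only so the sketch runs without one.

```python
from typing import Callable, Dict, Iterable, Iterator

# The prompt quoted in the comment above.
SECURITY_PROMPT = "Can you find real or potential security issues in this code?"

def annotate_corpus(
    fragments: Iterable[str],
    reviewer: Callable[[str, str], str],
) -> Iterator[Dict[str, str]]:
    """Pair every training-code fragment with a model-written security review.

    `reviewer` is a placeholder for an LLM call; in the commenter's scenario
    it would burn ~5 minutes of thinking per ~200-line fragment, which is
    exactly the cost that makes this impractical at training-corpus scale.
    """
    for frag in fragments:
        yield {"code": frag, "security_review": reviewer(SECURITY_PROMPT, frag)}

# Demo with a trivial stub instead of a real model call:
def stub_reviewer(prompt: str, code: str) -> str:
    return "uses eval() on untrusted input" if "eval(" in code else "no issues found"

annotated = list(
    annotate_corpus(["x = eval(input())", "print('hi')"], stub_reviewer)
)
```

The point of keeping `reviewer` as a parameter is that the pipeline shape stays the same whether the review comes from a cheap heuristic or a long-thinking model; only the per-fragment cost changes.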
youtube 2026-03-18T09:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzbzK1DfXhQUf8-cdZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzo8DfMGMYYW5VuhoB4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyOFpPiMwrf_0S1GUV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx9G6cXRXMF80OXK654AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxV6X73Y87oFBV4kr14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgymsN2JMxawandVGqR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwkKpWb70Re1cZpkM94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugzv4WtkaaKpUS2wJDF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxK2q4-Sm4ki2wN2T94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxyYRd7yOPVmmEvkYV4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"}
]
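The coded values in the table can be traced back to the batch response above: the model returns one JSON object per comment, and the entry whose `id` matches this comment carries the recorded dimensions. A minimal sketch of that lookup, assuming the comment id `ytc_Ugzv4WtkaaKpUS2wJDF4AaABAg` is the one belonging to this page (the JSON itself is taken verbatim from the response above):

```python
import json

# Raw LLM response from this page: one coded object per comment.
raw = """[
 {"id":"ytc_UgzbzK1DfXhQUf8-cdZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugzo8DfMGMYYW5VuhoB4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyOFpPiMwrf_0S1GUV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugx9G6cXRXMF80OXK654AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgxV6X73Y87oFBV4kr14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
 {"id":"ytc_UgymsN2JMxawandVGqR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwkKpWb70Re1cZpkM94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_Ugzv4WtkaaKpUS2wJDF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxK2q4-Sm4ki2wN2T94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxyYRd7yOPVmmEvkYV4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"}
]"""

# Index the batch by comment id, then pull the entry for this comment.
codes = {entry["id"]: entry for entry in json.loads(raw)}
coded = codes["ytc_Ugzv4WtkaaKpUS2wJDF4AaABAg"]
# This entry matches the Coding Result table:
# developer / consequentialist / regulate / fear
```

Indexing by `id` rather than list position keeps the lookup stable even if the model returns the batch in a different order than the comments were submitted.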