Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think you are really spreading fear instead of awareness. I have been a software developer who uses and has developed AI applications when we still called it ML. Did I notice a change in my job? Yeah, sure, but what did you expect if you throw a trillion dollars at a problem? And to go even a step further, is GPT-5 that much better than GPT-3? In my opinion, I expected more in the timespan of five years. I do think that most of the gains were made on the integration side of things: you use it in an IDE; we added agentic capabilities, which kind of is the same as throwing more compute at it. And now, to conclude: my job did change. I write less boilerplate, and it is easier to compare huge log files. I use it to go back and forth with ideas, use it as a sparring partner, and it is also a very nice search machine for a huge code base. But it makes so many mistakes when it comes to real problem-solving. If all you do is write boilerplate, are you then really creating software, or are you a monkey that copies some code and changes some values for this new case or parameter? Background: C and modern C++ in a low-latency video pipeline. I also like Rust.
YouTube · AI Governance · 2026-03-31T18:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugy5pIE9HkxIa9ZmiMF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx-4bRS7gwuwvt71rN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugxb0_zUJWZArqEHmk14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzGxb_8OGjsH-aYlt14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxp7QcDkJBGHOZPqdl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxGmw8VpbKIoVmOp8B4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwfIaMuNJqiUwpExPN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzFGoe-ilpvLxtBf4F4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy_1aPnZBDqHUani4h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyLQooXPCOtULsyrSx4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
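The raw response above is a JSON array of per-comment code objects, and the coding-result table for a single comment is just the record whose `id` matches. A minimal sketch of that lookup, assuming only the field structure visible in the example (`id`, `responsibility`, `reasoning`, `policy`, `emotion`; the ids in the usage example below are hypothetical):

```python
import json

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of per-comment
    code objects) into a dict keyed by comment id, so one comment's
    codes can be looked up directly."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Usage with a hypothetical two-record response:
raw = (
    '[{"id":"ytc_a","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"ytc_b","responsibility":"company","reasoning":"deontological",'
    '"policy":"liability","emotion":"outrage"}]'
)
codes = parse_codes(raw)
print(codes["ytc_a"]["emotion"])  # indifference
```

Keying by `id` makes it cheap to join the LLM's batch output back to the original comments, and `json.loads` will raise on malformed output, which is a useful early check when a model occasionally returns non-JSON text.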