Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a programmer who works with AI on AI, I'll say I don't care what people say: AI will never really beat a really good programmer unless it develops to the point where it can beat a human at anything. Why, you ask? AIs are trained on samples of data, the code that humans wrote, and of all the code out there, roughly 30% is good code and 70% is, well, "meh" code, and it will only get worse as people use more AI to code. So just by the stats, 70% of the time your code will be bad, and most companies cannot risk that.

Here is an example of a similar issue. The other day I used an AI service to generate a presentation for me. I looked at it, but it was too bloated: no animations, transitions, etc. After trying to get it to fix those, I spent an hour, gave up, and started making my presentation by hand. Guess what, the end result was so good I could sell it, and the audience would actually want to look at the presentation. I couldn't get the AI to get it right beyond the text and the pictures.

Same with programming: will it give you a general template for a program? Yes. But will that be different from any other similar program out there? No, and the program won't even be built if it's fully left to AI, at least for a few more years.
youtube AI Jobs 2025-10-09T02:5… ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugwk_xyof-7KmaWS0iJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwC9pWTHWe-dqnBYER4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRloeiPv5Mu0HIgYV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxcpy2l9R-YfcEWAoF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwyvZ3MPQWuFXJJryl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyHv5KuP_aKRJ9ZuFp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxktY6TtCqWxx5Scv94AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwwQxbjGyjk_pWlwjx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwzbJsOAMKo9ue0w3Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyZGhdk4RKuC4-BV_B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
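A raw response like this can be parsed and sanity-checked before the per-comment values are displayed. The sketch below is a hypothetical helper, not part of the actual coding pipeline; the allowed value sets are inferred only from the values that appear in this one response, so a real schema may include more.

```python
import json

# Value sets observed in the raw response above (assumed schema, may be incomplete).
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def parse_codes(raw_text: str) -> dict:
    """Parse the JSON array and index coded records by comment id,
    rejecting any record whose dimension value is outside ALLOWED."""
    coded = {}
    for rec in json.loads(raw_text):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded

# A one-record excerpt of the raw response shown above.
raw = ('[{"id":"ytc_Ugwk_xyof-7KmaWS0iJ4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugwk_xyof-7KmaWS0iJ4AaABAg"]["emotion"])  # indifference
```

Indexing by `id` makes the lookup for a single coded comment (like the one rendered above) a dictionary access rather than a scan of the array.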