Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm a dev in a company whose product is tied to AI/LLMs. The truth here is neither extreme (i.e. we need to forget about this AI stuff and go back to human code, or it is time to start replacing humans). The truth is right now the ideal model here is AI serving as a human companion to make that human better and more productive. Yes there are catches, Claude needs to be coached, but humans become designers, architects, and project managers a little more rather than trying to get that 1 line of code just right. This approach definitely seems like the sweet spot where things have landed. I don't see a future where AI replaces humans for complex systems because complex systems are built for human consumption and human accountability, so a human will always need to be involved in some way. If the public impression was otherwise, or some CEOs made poor decisions based on a bad reading of what this means, their criticism is deserved, but that does not mean that the AI itself is trash without application. It is absolutely useful when used well. It can outperform humans. This also requires understanding code. You can't really be that mentor and coach with a naive understanding of your code and what you are building.
youtube AI Jobs 2026-02-20T18:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyuPk2wAq50-ipbRXR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyIiStYS7fA8HPXDSh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyWBpXLkqiIwbbCYf54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxh5xd-7ur6cB-0hlt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwFa3Ln_r9nMxtL6TF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxlKrT8w_4pTmriAtF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxULFdpVxY1PpO5Egd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz4yOuuRAij8zbickJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzuaVc0lLmlE38xz-t4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgznPciN2geRTwpsJF14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
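
The raw response is a JSON array of per-comment records keyed by comment id. A minimal sketch of how one might parse it and pull the record behind this page's Coding Result; the id-to-comment match here is an inference from the dimension values (only one record in the batch is none/mixed/none/approval), not something the page states directly:

```python
import json

# Excerpt of the raw LLM response (the full array has ten records).
raw = '''[
 {"id":"ytc_UgyuPk2wAq50-ipbRXR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyWBpXLkqiIwbbCYf54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]'''

records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Record whose dimensions match the Coding Result table above.
rec = by_id["ytc_UgyWBpXLkqiIwbbCYf54AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → none mixed none approval
```

Indexing by `id` first makes the lookup robust to the model returning records in a different order across runs.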