Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As long as we, as developers, are responsible for understanding stakeholder needs and translating them into technical solutions, I struggle to see the value in relying on something as inherently ambiguous as natural language. If we're ultimately required to express precise technical requirements anyway, why not just write the code directly—where intent and behavior are unambiguous and executable? I'm not in the camp that denies AI's potential, nor do I buy into the whole singularity narrative. To me, the current AI hype feels massively overblown. Right now, AI is a powerful addition to our toolbox... nothing more, nothing less. Most of what we’re seeing is just marketing. Sure, the big tech players are throwing billions at it, but that’s more about not wanting to be left behind in the next big tech race. They can afford to place these bets. If it pays off, great for them (and possibly bad for the rest of us); if not, they’ll pivot to the next shiny object, just like they did with the metaverse, VR, AR, and so on. No one wants to be the next Nokia in terms of mobile phone market share
youtube 2025-05-23T08:1… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwSsAqOMzbCzLQT8OV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJGxfHE60KhBeoRKV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugznumg3TeNgF7TQXT54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz8bLkhMpSNkV8ViCl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJT-Msyep1Nyhbkf14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw6oqJ0yV9BbyiO6mJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxkRLjLyPtrFbsGrRV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx7aaZCFltqUuuLSGh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwypLWw6YQxrzLvEoN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxJOfzhpitFMZ8hiqV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
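A raw response like the one above is only usable once it has been parsed and checked against the coding schema. The sketch below is a hypothetical validator, not part of the tool shown here; the allowed value sets are inferred solely from the values visible in this output, not from a documented codebook, so they are assumptions.

```python
import json

# Allowed values per dimension, inferred from the visible output above
# (assumption: the real codebook may permit more categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "approval", "resignation", "outrage", "fear"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record's dimensions."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids in this output all carry a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage with one record from the response above:
raw = ('[{"id":"ytc_Ugz8bLkhMpSNkV8ViCl4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"none","emotion":"indifference"}]')
records = validate_coding(raw)
print(len(records))
```

Validating before storing results catches the common LLM failure modes here: malformed JSON, hallucinated category labels, and ids that do not match any comment.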