Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The heart of the issue is accountability and liability when AI doesn't deliver expected returns. If companies implement AI for as many jobs as possible, then in theory (and pushed to its limits), we could just have the CEO oversee the AI VPs that check on AI Directors, that manage AI managers, and so on (or just one AI that oversees everything, if you prefer). However, this creates the problem of the CEO being the one responsible -- the one liable for everything. Someone has to be accountable when the AI doesn't deliver as promised.

But if you keep the VP of Operations, that's a buffer. The VP of Ops is interested in having a buffer too, so they create or keep the deputy director of Ops. And the same goes for every other executive area. Boards will also agree that this is necessary, because the cost of holding VPs, deputy directors, and perhaps even managers accountable -- and replacing them -- is higher than the cost of replacing a supervisor or analyst who oversees the AI.

So the question is: are you in a position that supervises and answers for the results of an AI solution? If you are, then that -- along with how complex the solution is -- will determine how likely you are to remain in that position. My two cents.
youtube · AI Jobs · 2026-02-24T20:2… · ♥ 2
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwBS7UdMtu0yICkqNJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgwevxEc64EA9CXhd1Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyiqhWhsSfCMAJtNp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-yo_rkJq9euG3jAR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugylpb8auxiwfYGoYH94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwt-0ZzjBoXqq3N2BN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxLRScyDbmWmXkseAx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyOlBcXwkQb0rd7nwh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyTHoM1Twk1x1qqxfF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwETB4fe_wqGfMrY114AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
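When inspecting raw responses like the one above, it can help to parse and validate them programmatically rather than by eye. The following is a minimal Python sketch: the function name `parse_codes` is hypothetical, and the `ALLOWED` sets are inferred only from the values observed in this batch (the full codebook may define more categories).

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the values
# that appear in this batch; the actual codebook may permit more.
ALLOWED = {
    "responsibility": {"company", "none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"indifference", "outrage", "approval", "fear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of per-comment code objects),
    keeping only records that carry an id and use in-codebook values."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # cannot join back to the source comment without an id
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with the first record from the response above:
raw = ('[{"id":"ytc_UgwBS7UdMtu0yICkqNJ4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"indifference"}]')
codes = parse_codes(raw)
```

Filtering rather than raising on a bad record keeps one malformed object from discarding the whole batch; dropped ids can then be re-queued for recoding.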