Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Exactly. Going all-in after only seeing results from the minimum viable product and extrapolating everything can be replaced is proving to create more work cleaning it up (the "told ya so" is going to be at the same level as Hillary and Kamala). And who wants to do that? At my workplace we have access to several LLM's, and it's proving to be more work and effort trying to prompt it to correct results than actually worth it having it even try. And these aren't even difficult tasks. As mentioned in the segment, you're going into coding debt which will take so much more time, effort, and resources to fix. BTW, I'd stop using Apple (repeatedly) in these clips unless it's actually relevant -- they are not laying off AI staff left and right compared to other companies. They may be losing talent to other companies for AI work, but no doubt those employees will find it miserable without infrastructure, OR, they'll be laid off when this bubble is realized and bursts. This video will age well. As much as there's the promise AI can get better (just a few years ago AI images were filled with 6-7 fingered hands but now is pretty indistinguishable from real photos) but for the time being if companies are letting go of people on a promise rather than actual results, they're going to end up paying more in the long term. Not fighting layoffs as every economic downturn that happens, but doing it as a knee-jerk reaction and then figuring out you actually do need people is short-sighted and poor planning.
youtube AI Jobs 2026-03-01T18:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyKQKHFEL9IlsxzGTB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx6KAb87RmvBrSXtOZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxl___MLL2lEgg3a9h4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxKECb0plmeJHtcoT54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyEVdZNMwVZ1HQ-_K14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxydbKeIfSultKZm7V4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxYeKqUrELoUtKNBQN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzoH_jl1joAP58EEzJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgybKAfslzP5XMcKeXx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx9i-7h1eSql-LNAbR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
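The coded result for a comment can be cross-checked against the raw response with a short script. A minimal sketch, assuming the response is the JSON array shown above (the field names come from the response itself; the single-entry `raw` string here is abbreviated for illustration):

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# Abbreviated to one entry here; in practice, paste or load the full array.
raw = '''
[
  {"id": "ytc_Ugx9i-7h1eSql-LNAbR4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "resignation"}
]
'''

# Index the codes by comment id for fast lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the dimensions coded for a given comment id and print them
# in the same Dimension/Value layout as the coding-result table.
code = codes["ytc_Ugx9i-7h1eSql-LNAbR4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim.capitalize():<16}{code[dim]}")
```

Comparing the printed dimensions against the stored coding result makes it easy to spot parsing or id-matching errors in the pipeline.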