Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
From the perspective of a hiring manager that's had AI foisted on them to "augment" - the AI makes more mistakes than even the day 1 grads I've hired in years gone by. It's less able to articulate errors so I spend more time checking its work. The graduate can at least tell me how they got an answer in a way that can be corrected easily. I don't need staff that are good at prompting the way Mr seller of AI platform thinks I do, I need staff that understand why the prompt is written that way. Why I need them to find that piece of info. What are we trying to prove? How do we teach junior staff when the experienced gained in what feel like grunt tasks is gone? Human experience is based on the things we learned before and being able to apply them. All we're going to end up with is a series of blow-ups in 10 years when there's no experienced staff coming up behind to take over. Madness.
youtube AI Jobs 2025-11-24T12:1…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwRLjx7CBXAY1wr_P54AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_UgyLbFMMVmyaITCaaL54AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgwLkqJ0_q0ZDaL2F7l4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzlwmPjPuuHrK_Vp-h4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw00QT0knZUPNsFcJd4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "none",    "emotion": "resignation"}
]
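The coding result for a single comment can be recovered from the raw response by matching on the comment id. A minimal Python sketch, assuming the raw response is a JSON array with the field names shown above (the `lookup` helper is hypothetical, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coded comments. Only the record
# corresponding to the comment shown above is included here.
raw_response = '''[
  {"id": "ytc_UgzlwmPjPuuHrK_Vp-h4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "mixed"}
]'''

def lookup(raw, comment_id):
    """Return the coding record for comment_id, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup(raw_response, "ytc_UgzlwmPjPuuHrK_Vp-h4AaABAg")
print(record["responsibility"], record["emotion"])  # company mixed
```

Matching by id rather than by position keeps the lookup robust if the model returns the batch in a different order than the comments were submitted.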