Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is like super micro managy manager on steroids, literally watching you your entire shift and you are even super micro managing yourself. There are like thousands of opportunities to mess up in a day. What Amazon wants are robotic humans who can follow every single rule and regulation, the laws, do it fast, legally and consistent. And if you do everything perfect, you get rewarded with a bigger load and more delivery stops. So you have to be the goldilocks of drivers. Don't be the fastest one out there, don't be the slowest. Don't break any of the rules and law and regulations. Do a good job, every shift, all the time. Whole nother level of mental stress, not to mention all the AI stuff towards the end. Wild. I feel this type of job needs to be a rotation, like doing 1 week on and then 1 week off doing something else or you will literally lose your mind. Doing this for a whole year straight is diabolical . Its Inhumane, unethical, dystopian, cold, ruthless, heartless, zero trust, zero freedom, unreasonable expectation, and frankly should be illegal. And they compare you with everyone else & rank them with all the infractions and score so everyone can see. That's brutal and dehumanizing, disheartening, creates tensions between co-wrokers and bosses, fosters unhealthy competition within the workplace, distaste & resentment for the company. I dunno how anyone can be happy after doing this for a week. Is it just me or do everyone think the same?
youtube AI Harm Incident 2025-06-19T04:3…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzMGq3mVLwrvdwa6tx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzOOsPqk5zhsrMW9ld4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxNTWFflV5chfRqyFR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgylIOSg6_RpBQx9YhN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwN0-e0a9fILAHKTrZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwPSZYtannOWRQHDiZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzf4ldP0KqSdf7U2ol4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwKex9t1iPVEvV5GD54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwgK00DvJUo_YekEj94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxraHTcuDursmLJBTd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
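The raw response is a JSON array of per-comment coding records. A minimal sketch of how such a response could be parsed and schema-checked is below; the allowed value sets are inferred only from the responses shown here, not from any documented codebook, so treat them as assumptions.

```python
import json

# Assumed value sets, inferred from the records above (not an official schema).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"unclear", "ban", "regulate", "none"},
    "emotion": {"indifference", "outrage", "resignation", "fear", "approval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose every
    dimension holds an in-schema value."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"resignation"}]')
print(len(parse_codings(raw)))  # → 1
```

Filtering rather than raising keeps a batch usable when the model occasionally emits an out-of-schema label; dropped records can be logged and re-coded separately.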