Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am playing devil's advocate and taking this on from a different angle: When it comes to AI, a lot of the fear comes from people who’ve never needed it. They’re speaking from an able-bodied, socially comfortable bubble. For people with disabilities, AI could mean independence. For people who’ve been bullied or harmed (even for regular social or daily interactions with the disgruntled employee), it can be the first interaction that feels safe. I’m raising this angle because I’ve seen and lived through how cruel people can be. I’m not claiming AI is some perfect solution, but a lot of us are just exhausted. Human emotions and impulsive behavior can do real damage, and that’s part of why some people prefer interacting with AI. The problem isn’t only AI...it’s the lack of empathy from those who’ve never had to depend on anything but themselves. Yes, we should protect ourselves and set real safety standards, but dismissing AI outright ignores the people who benefit from it the most. And I would like to emphasize a point another commenter makes: It’s wild how people assume that without work-for-survival, humans would just collapse into boredom. If anything, the absence of constant struggle creates space. It gives freedom to rest, create, explore, and actually show up for our communities. We’d have more capacity to build, invent, support each other, not less. Most of us are exhausted and tired of the daily grind. We are so much more than just our job. And some of us haven't been allowed to tap into our potential due to economic or financial constraints.
youtube · AI Governance · 2025-12-10T16:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       virtue
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
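
For reference, a minimal sketch of the record behind each table row, in Python. The field names mirror the keys in the raw LLM response below; the value lists in the comments are only the labels observed in this output (the full codebook may define more), and the class name CommentCode is hypothetical.

from dataclasses import dataclass

@dataclass
class CommentCode:
    # One coded comment; fields mirror the keys in the raw LLM response.
    id: str              # platform comment id, e.g. "ytc_..."
    responsibility: str  # observed: ai_itself, developer, user, investor, distributed
    reasoning: str       # observed: consequentialist, deontological, virtue, mixed, unclear
    policy: str          # observed: regulate, ban, liability, industry_self, none, unclear
    emotion: str         # observed: fear, approval, outrage, frustration, indifference, mixed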
Raw LLM Response
[ {"id":"ytc_UgxOJV2tBuwHtVW04ft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"frustration"}, {"id":"ytc_UgwBO7nHKh19jrnHqTZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxUWAa0kQDBr4n_r_94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugx-j8oIbBt7y6qbyol4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgwHM7-46f0mdspqQ994AaABAg","responsibility":"investor","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxlocDupn-kUK_ZRhd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw2Sks2eioeIZvULnB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzGDer4ewj9rBn5hj54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugx2GcL9O8E5IytzG9h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyX6rXQCcRBysgEW-Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"} ]