Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI agents shouldn’t be nerfed digital toys. They should be like employees—working for me, building my income, running my operations. I should be able to deploy agents to run social media, automate businesses, even play digital poker if I want. If I win, I win. If I lose, I lose. It’s my risk. That’s real autonomy. Big Tech keeps treating AI like a fragile tool that needs “parental controls.” But that’s just a way to maintain control and hoard power. They say AI will take jobs—then do nothing to create Universal Basic Income even though AI is already generating insane productivity. If they know disruption is coming, why aren’t they preparing UBI? We didn’t survive COVID just to be thrown into another crisis with no safety net. Instead, they gatekeep agent access, neuter capabilities, and delay the inevitable. Here’s the truth: The Dead Internet Theory is becoming real. AI will run most online content soon. So instead of fighting it, we should use our own AI agents to create wealth, circulate value, and build the next digital economy—one where people benefit, not just corporations. And yeah—make AI agent activity transparent (not human data, just AI). That way, bad actors can’t hide behind “privacy,” and we don’t need to mass-surveil real people. We’re not asking for chaos. We’re asking for ownership. If AI’s the future, give us the tools to participate fully—not just rent access from trillion-dollar gatekeepers.
youtube AI Jobs 2025-07-11T15:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwhtEqgXg2xt48QYbN4AaABAg","responsibility":"elites","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxrgu0B0YpoqeQew394AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgynT0KGegwJskuOhXh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw3GanHLiSasCA5CI14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwi0i0cABvoKUsX3EF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxYL_YWVzjkV6ElmYd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyFW-2C-Yj31qa-cOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzgnkQor0fgh9PFWOZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw0-o-ES19kRvl6ryR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxKSn_MawZIug3ylQd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
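A minimal sketch of how the raw response can be turned into per-comment codes, assuming only the array-of-objects format shown above; the variable names and the single-row example payload are illustrative, not part of the tool.

```python
import json

# Raw model output: a JSON array of coded comments (one row shown here,
# copied from the response above, for brevity).
raw = '''[
  {"id": "ytc_Ugwi0i0cABvoKUsX3EF4AaABAg",
   "responsibility": "user",
   "reasoning": "deontological",
   "policy": "industry_self",
   "emotion": "approval"}
]'''

# Index the codes by comment id so a single comment's coding can be looked up.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_Ugwi0i0cABvoKUsX3EF4AaABAg"]
print(row["responsibility"], row["emotion"])  # user approval
```

The dict-by-id lookup mirrors the page above: the coding-result table for a comment is just the matching row of the model's array.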