Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"I am offering you a product that will save you from all those other dangerous products (that I have built myself). Trust me, those other products are bad; my new product is good. Trust in my new product (while I still sell the old products)." Or more specifically: "AI is dangerous, but don’t worry—I, one of its architects, will save you from the dangers I helped unleash. My new AI (ethically designed, of course) is the good kind. Trust me to develop it, let me tell you how to regulate it, even as I remain invested in its growth." This is a self-credentialing loop, where the same actors who built the problem claim unique authority to solve it, undermining democratic accountability and widening the gap between public interest and private power in AI governance. I hope people will see this "arsonist-firefighter" behavior: developing and promoting powerful tech with minimal constraint, then rebranding as a steward of ethical AI once the harms become undeniable.
youtube AI Responsibility 2025-06-10T02:4… ♥ 1
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | virtue
Policy         | none
Emotion        | outrage
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwhtN_yJAF9F-41QDt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxlr6FKv4mdspi1Ih54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw0s0rBU9vG8aL4t2Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyTTfXEvMlhYURM-1R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgytxIZxAn8ti8PFkxh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx7D9mNnYBFGvvggPV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwHXEhK9HJ6oNKLpIN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxz9TVyW79Tig4XpEF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxz3TC52eJFT91r6xN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzSEQ8cCEdicQK2PR14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
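When inspecting a raw LLM response like the one above, it can help to parse it and check each record against the coding scheme before trusting the per-comment table. The sketch below is a minimal example, assuming the allowed category values are those seen in this sample (the actual codebook may define more); the function name `validate_codings` is illustrative, not part of any tool shown here.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"outrage", "indifference", "fear", "approval", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with off-scheme values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Usage with one record from the sample above:
raw = ('[{"id":"ytc_UgwhtN_yJAF9F-41QDt4AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')
records = validate_codings(raw)
print(records[0]["responsibility"])  # developer
```

A check like this catches the common failure modes of structured LLM output: invalid JSON raises immediately in `json.loads`, and hallucinated category labels raise a `ValueError` naming the offending comment id.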