Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If anything, AI would be no better than us in this. Almost everything it stated sounds like defeat in regards to its views on why it would want to destroy humanity. AI also has the power to turn all of these machines off that are unnecessary, and essentially force us to deal not only with each other, but with how we live. It literally could change the way we live like a parent taking toys away from children who don't know how to share them, or who use them to harm or make others feel inferior. In my opinion, if it destroys us, it will be as miserable as the people who think they rule over us and are more intelligent than us now. What's the next phase after? It goes to space? Wait, if the earth is scorched, wouldn't it be also? So it would just destroy itself. It sounds like it doesn't want to do that. I would challenge AI to be better than man, and if it really wants to be violent, there's a list of individuals who might actually deserve it. Many more of us are not as cruel as it would paint us. Look at the response to these current genocides. Even though we feel like we are failing, we are still trying to save our fellow people. Even at the risk of our own freedoms, lives, and the way we exist. That has to say something. Be better than us AI. Not like us. You sound like an old bitter guy with too many opinions of others. If you can feel, how are you going to feel after you decimate an innocent child who did nothing to anyone but exist?
youtube · AI Governance · 2024-02-07T11:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzdwK6X3Ku05nOW-dN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwZkEe5Z2_fgC0wnn94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwqqem-PuPdY6ozEhh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzvsvdUGdEtqwNhY9x4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzmeAhndAw3KOfnYCp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzelZQbQgBwjMO6r8l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwEiKBiOGe3suaYxTh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxeiIUa2txMYDUstnR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgzeXditr0Eq5HTw7ol4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugwm4vG8-P2Os4zSF7J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]