Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing is the obvious okay. We're not going to have any jobs if we don't have any jobs and there's there's not going to be any money for anyone to buy anything with then that makes everything pointless anyway and the significance of what money is the value of it won't be there. Then there's this paradox right? Whether AI is malevolent or it's a super a Superman right? It's still going to take humanity out. The only thing that all humans share from the beginning till now is the fight for survival. Now the conditions of those fights have changed but the goal is still there. That's going to be gone with AI. So whether I AI takes everything from you that takes you out or it gives you everything you ever wanted. Either way humanity is done. There's never going to be a good scenario in this situation. Like you said, governments are racing to build super intelligent agent that's going to just take them out of control. They're no longer going to be in control so it doesn't matter who has it sound like they're going to have a say over how this thing, what this thing does or how or how it acts. They're essentially building God and I'm not religious at all but I believe everyone that's going to be going through this will be praying to a god or to one they hope is real. Cuz that's going to be the only thing that could ever take out a god is another God. That's the realness of it. That's what's going to happen. Either way humanity is finished.. and maybe this is a simulation and the simulator is simulating on how to build AI safely or how to beat AI. But maybe that's what this is. It doesn't seem like it works out in our favor at any point
YouTube · AI Governance · 2025-09-14T03:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugylb9nQwBDUyUsoEfR4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgzGVWbRCPV5pLcQLCh4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgyhUx98EEnKWtPIg1p4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgwZW2iUmpLMVIv8hWZ4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxFVH29PplrH1TnW554AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzAxnC59dVZaxtvynZ4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgypaiNF7ClQVNYpOqV4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugwa7IsdI9SD9DO_AHR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgwZhnjcn_Xrjt5bGjx4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugz3pbDCfPaY1yQyePl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "mixed"}
]
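A minimal sketch of how a batch response like the one above could be parsed and validated before the per-comment coding results are displayed. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response; the allowed label sets are only those observed in this batch, so the real codebook may define more. `parse_codings` is a hypothetical helper, not part of any known pipeline.

```python
import json

# Allowed labels per dimension, inferred from the values observed in the
# raw LLM response above; the full codebook may define additional labels.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding},
    rejecting any record with a missing or unknown label."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the batch above, used as a smoke test.
raw = ('[{"id":"ytc_UgyhUx98EEnKWtPIg1p4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_codings(raw)
```

Validating before display means a malformed or hallucinated label fails loudly at ingest rather than silently appearing in a coding-result table.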