Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
It's a little bit too late for any kind of action to be impactful in preventing the destructive effects of AI. In the same sense as nuclear weapons, militarized countries are all racing to harness the best versions of AI, mostly with the intent to weaponize it as a defense measure. Ironically, we'd rather have an all out nuclear war than reach singularity with AI. We all instantly die in a nuclear holocaust. With ASI, its going to be a slow and excruciating death and were there to witness it. Children starving to death, save crimes and what not. This time around there is no slithering our way out of this. Everyones just wants to get a piece of the AI pie. First world, and democratic countries might set moral and ethical boundaries for AI, but for the rest of world they'll be having like jailbroken versions of chatgpt. Most of us in democracies only get to see our own cultures.... but it's like a total paradigm shift when you immerse yourself in another country with cultures such as Slavics, Balkans, Africans, Chinese, Middle Eastern etc. Some of them seem like they are still living in the middle ages, some are morally corrupt to the core, some are very materialistic and greedy, and some are just crazy psycoppathic and suicidal like nothing else matters and they act like hypnotized guinea pigs. AI in the hands of any of these groups of people pose a danger far worse than a nuclear bomb.
YouTube · AI Jobs · 2025-10-11T05:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugyq6YNOjMdjbA3MpUt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxf6eqsN5i-TGC_kG14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzAaWa26hJUg-YHLP14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzzXKXC4vRCLFrkltF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyDHsuh-IkxkuzhjgJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyqbM_NhySmCg2h9pJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzydTyAXYrXys5wBBh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxU19X2-eQxH3Yq8mJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz2WxrYO2slE4C3CS94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyTwsyQ8Ajfqai56Vx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
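The raw response is a JSON array of per-comment codes across four dimensions. A minimal Python sketch of how such output could be parsed and validated before it is stored — the allowed label sets below are inferred from the values visible on this page, not from the full codebook, and the function name is illustrative:

```python
import json

# Label sets inferred from the coded examples on this page; the real
# codebook may define additional values.
ALLOWED = {
    "responsibility": {"company", "government", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "approval", "indifference", "outrage", "resignation", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose labels
    fall within the allowed sets (and which carry an id)."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        has_id = isinstance(rec.get("id"), str)
        labels_ok = all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
        if has_id and labels_ok:
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
print(parse_codes(raw))  # the single well-formed record survives
```

Dropping (rather than repairing) records with out-of-vocabulary labels keeps the coded dataset clean; rejected ids can be re-queued for another coding pass.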