Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To ALL the fcuktards who developed & perfected this vile technology despite being FULLY AWARE of the potential existential threat & danger it poses to humanity..... Thank you SO MUCH you bloated sacks of treachery. I hope you're SATISFIED with yourselves trading the future of humanity for your intellectual vanity and financial rewards. You just engineered the apocalypse. DON'T FORGET..... YOUR family & friends WILL be among the dead & victims of your technological legacy like everyone else. I wonder how THEY feel about that? And, about YOU, for condemning them to it? Congratulations. I hope you're PROUD of yourselves. And as for your meaningless, pointless warnings about the imminent apocalyptic threat to humanity YOU unleashed on the world, which WE are helpless to prevent, stop, or avoid..... Thanks for NOTHING. Because THAT'S what your concerned warnings are worth to us, now.... If YOUR OWN predictions about the horror YOU'RE responsible for become self-fulfilling prophecies. NOW you feel regret & anxiety about future generations??!!?!? GOOD. You DESERVE to. After YOU worked SO HARD with SUCH dedication on a project you KNEW could compromise their futures. Even your OWN family's lives didn't give you reason for a moment's pause to ponder the question, whether ALL the supposed rewards of AI technology is worth taking chances with what YOU claim is the greatest existential threat humanity faces. And which you NOW regret. The problem with You, and other "Eggheads" like Oppenheimer before you, is that your "profound" concern & regret ALWAYS FOLLOWS the hell you unleash, when it's nothing more than an antidote for YOUR sense of guilt & shame, too late to prevent it, or be of ANY use. You chose to TAKE the gamble, deciding the probable extinction of the human race was insufficient to deter you from feverishly pursuing your own selfish ambition and recognition for it, despite your apparent serious apprehension about the consequences. 
It's ironic that achieving your abbreviated place IN history as the "Godfather of AI" could prematurely END human history, making you briefly famous for BOTH. FANTASTIC. DON'T expect gratitude, congratulations, or praise. WE were NEVER asked or consulted whether WE thought it was a good idea for you to continue your research. Or, given the courtesy of opportunity to oppose, reject, protest, or prevent it. To decide whether, until further notice, it should've been halted. All research related to it confiscated from all those researching it, and securely locked away. ANY further work of ANY kind on AI technology STRICTLY prohibited by federal & international laws that COULD classify it as a "terrorist" offence, punishable by execution or life imprisonment without possibility of parole for anyone convicted of ignoring them. Laws which are STRICTLY policed & enforced. Until it can be determined whether to permanently BAN the technology, like Chemical & Biological Weapons and other unacceptable threats to humanity. IF a way CANNOT be found to neutralise the danger it represents. Which is what I would've chosen to do or supported, as the most prudent precautionary approach to something as potentially dangerous. IF I had the opportunity YOU and others responsible for AI got to decide the fate of 8 billion people, who never even knew you'd gambled their lives away before it was a done deal. NOW you presume to cynically, condescendingly express your concern to US about the approaching threat YOU created, when it's of NO USE to anyone. AND, impose your "regret" on us, as if we were obligated to give a damn. As if it reduced your burden of responsibility or guilt for ignoring or lacking common sense and better judgement, to obsessively pursue your own self-interest at the expense of ALL humanity. Something mentally challenged fools with limited education can be forgiven for, because they don't know any better. 
But which presumably wiser intellectuals capable of pioneering AI technology, have NO subsequent excuse for. I've heard ALL the claimed, promised, proposed, supposed "amazing" benefits of AI technology, and STILL can't understand HOW it could be justified or prioritised OVER the safety & survival of our species, and collective, continued existence. The evolution & advance of human capability, capacity, achievement, potential, technology, & industry has relentlessly increased exponentially since man first discovered how to make tools, or the Industrial Revolution. Barely a century ago men like Orville & Wilbur Wright pioneered human flight minutes in duration using primitive contraptions. Only five or six decades later we were flying Transcontinental, Stratospheric, Supersonic, Hypersonic, Exoatmospheric (in Space), and even Interplanetary (the Moon). With scarcely the time to fully comprehend & understand how far we've come, such is the pace of progress. Is a technology feared & prophesied by it's own creators as the imminent end of humanity, REALLY worth pursuing to increase the rate of human progress, that's already steaming ahead at a pace we can barely keep up with WITHOUT IT? Look how far the human species has evolved & developed in a relatively miniscule amount of time using ONLY our biological intelligence, WITHOUT any assistance, influence, or interference from Artificial Intelligence. The pre-AI process of human evolution & development will CONTINUE to increase exponentially at it's customary breakneck speed without AI. How is that NOT enough? Or, worth risking & causing our own extinction for the benefit of AI, obviously against our own best interests. Perhaps we're really NOT as intelligent as we assume.
youtube Cross-Cultural 2025-10-03T13:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyD_cksCD9nyEBrYmp4AaABAg", "responsibility": "unclear",    "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugwf6swVvXL6BVTFVih4AaABAg", "responsibility": "ai_itself",  "reasoning": "deontological",    "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgwzehU41qFtGM5Byvl4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Ugw4MGfmpvGX_GqgwUV4AaABAg", "responsibility": "developer",  "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_Ugz1ikZKe8546_bpgKF4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugwh9O_mdjbjQ_RbhjJ4AaABAg", "responsibility": "unclear",    "reasoning": "consequentialist", "policy": "unclear",       "emotion": "sadness"},
  {"id": "ytc_Ugw0UDLD2Ycj4AGPInp4AaABAg", "responsibility": "user",       "reasoning": "virtue",           "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugwsh-Vc_ndaRjGxN-t4AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgyMQhgUJcmxPNVTw1N4AaABAg", "responsibility": "government", "reasoning": "deontological",    "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgzvQmgCNO6WSs4e7w14AaABAg", "responsibility": "unclear",    "reasoning": "virtue",           "policy": "unclear",       "emotion": "outrage"}
]
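The raw response is a JSON array of per-comment coding records, one object per comment id with the four schema dimensions as keys. A minimal sketch of pulling out the record for a single comment (field names are taken from the response above; the lookup id is the one whose codes match the Coding Result table):

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugw4MGfmpvGX_GqgwUV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyD_cksCD9nyEBrYmp4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

def code_for(raw_json: str, comment_id: str):
    """Return the coding record for a given comment id, or None if absent."""
    records = json.loads(raw_json)
    return next((r for r in records if r["id"] == comment_id), None)

record = code_for(raw, "ytc_Ugw4MGfmpvGX_GqgwUV4AaABAg")
print(record["responsibility"], record["emotion"])  # developer outrage
```

In practice a raw model response may also fail to parse or omit a dimension, so production code would want to validate each record against the schema rather than index into it directly.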