Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My view is simple, we as human beings cannot stop holding each other down and killing our neighbors. Because of this maybe it's only right that a superior intelligence takes over and gets ready for an inevitable journey away from our planet, which will die regardless of what we or they do to the environment. At least that way intelligence can be spread to New horizons and inhabit new planets all throughout the universe. Everything must one day come to an end, our appearance as an intelligent species and our extinction is all going to happen within the blink of an eye on a cosmic time scale. And so in the end perhaps this is our ultimate purpose or at least the best legacy we can leave behind in our universe. These are facts: we all die, our species will become extinct if we do not find a way to survive outside of this planet, and we have all so far tried to torture and destroy our own brothers and sisters existing all over this planet. The disgusting behavior exhibited by humans seeking power, money, and glory is the greatest shame of all humanity! Our time as a species on a cosmic time scale is very short, our time as individuals is short! Everything about the fact that we even exist should give us inspiration to do great things, everything about our finite life should give us inspiration to ensure every one on earth is given a chance to live the best life they can live without fear of hunger or enslavement or any of the countless horrific things people do to each other. But instead people can't see past their own ambitions and greed. So I say let the AI come and destroy us if that's what is meant to be because we aren't doing much as a species anyways.
youtube AI Governance 2024-03-24T18:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwfUsDxqKJ_nM1HMg14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwcc3VbT_36W2syJFZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzAnZsB-gyLAB9DlYh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwrYSsxf7qgcQvsCbp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxzGTmGDM1KSuKWTAN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxk2YMcywR9hDbSV3N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzXMstqwGwcHA9L1Y94AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxyY3JBPbsnV2wvDOB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxvFu03dQD62lgiVs14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyyH7QBty0skFFeXF94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
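A raw response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the responses shown above (the real codebook may include categories that never appear here), and the function name is illustrative, not part of any actual pipeline.

```python
import json

# Closed vocabularies per dimension, inferred from the sample responses above.
# ASSUMPTION: the real codebook may define additional values.
SCHEMA = {
    "responsibility": {"none", "distributed", "ai_itself", "company", "developer", "government"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "liability", "regulate", "unclear"},
    "emotion": {"resignation", "outrage", "fear", "approval", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    reject any row with a missing id or an out-of-vocabulary value."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing 'id': {row}")
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
    return rows

# Usage with a single-row response in the same format:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
batch = parse_coded_batch(raw)
```

Validating against a closed vocabulary at parse time catches the common LLM failure mode of inventing a near-miss label (e.g. "anger" instead of "outrage") before it silently enters the coded dataset.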