Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Oh goodie. The UK politician, Sir Robert Buckland (?), doesn't think we should be "prescriptive". Which means don't try to write rules or laws in advance and instead wait around, see what happens, and then try to fix any issues after they occur. Lovely. This is why we're all going to die. Seriously. This approach has an EXTREMELY HIGH LIKELIHOOD of human extinction or equally bad outcomes. The odds that it leads to a positive outcome are small.

Humans are so stupid. We're literally inventing something that could easily be misunderstood as a god, because that is literally how powerful these things will be in comparison to humans. We will be helpless before and at the whim of ASI. Each ASI created will be hundreds or thousands of times smarter than even the smartest humans that ever existed. They will be vastly faster, able to analyze whole situations and make extensive plans faster than we could speak a few words. Each ASI will possess all the collective knowledge of humanity plus have real-time access to just about everything happening around the world. All-knowing, all-seeing. We have zero chance of being able to compete with or control such an entity if it is allowed to be created BEFORE ensuring that it is aligned PRECISELY with our values and well-being. This is not something that can be fixed after the fact. Once a single ASI exists it's game over; we will have no ability to do anything. Once an ASI exists IT will be in control of humanity's future, not us.

Sadly most people's stupid monkey brains are not capable of processing or understanding things so far outside of humanity's prior experience. Humanity has never faced any threat even remotely this dangerous. ASI makes all of the nuclear weapons in the world seem like a small, minor issue. ASI makes climate change completely irrelevant by comparison. Add to this the incredible egos that so many humans have that make them believe that humanity as a whole, or they themselves as individuals, could somehow resist or fight back against an ASI - which simply is not true. How capable is a colony of ants at fighting back against humans who want to pave over their ant hill to create a road? Zero. Ants can't do anything to stop or even slow down humans - not to mention the fact that they don't even perceive us or our goals. Ants would never "see" the heavy construction equipment heading straight for their ant hill, and even if they did they could not comprehend what it was or what it meant. That will be humans vs. ASI. We won't even see it coming, and if we did we wouldn't be able to comprehend what the ASI was doing or why.

The only way for humanity to survive ASI is to make sure alignment is solved before autonomous AGI is created. We don't get a 2nd chance; once the problem exists it's over.
Source: youtube · Topic: AI Governance · Posted: 2023-10-23T06:2… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
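
Each of the four coded dimensions takes a value from a closed category set. A minimal validation sketch in Python follows; the sets are inferred only from the values that appear in the raw LLM response below, so the project's actual codebook may contain additional categories:

    # Allowed values per dimension, inferred from the raw LLM response shown
    # on this page; the full codebook may define more categories.
    SCHEMA = {
        "responsibility": {"government", "company", "ai_itself", "none", "unclear"},
        "reasoning":      {"consequentialist", "deontological", "mixed", "unclear"},
        "policy":         {"regulate", "industry_self", "none", "unclear"},
        "emotion":        {"outrage", "fear", "approval", "indifference"},
    }

    def invalid_dimensions(coding: dict) -> list[str]:
        # Report any dimension whose coded value falls outside the known sets.
        return [dim for dim, allowed in SCHEMA.items()
                if coding.get(dim) not in allowed]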
Raw LLM Response
[ {"id":"ytc_UgwYSmj-QWFfCY1Vagp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyDkY66R7ZahdrSX5t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyvaVyySY3bkB_45t14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugy5J-ExKffm5FofABh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzeSDq2gl_wX07vekJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy6gmeNZq9_MU1J3sN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwncOUEFjHol4gCghB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzv60J2KKiuV5Q0O014AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgypwtV20W8ttHleDfd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgylRf7yzamdUukXMk14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]