Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Eliezer Yudkowsky does not really understand what he is talking about, because he fails to comprehend a number of fundamentally important aspects of what is happening.

First and foremost, what is happening is driven by the Laws of Nature, and the only way to stop it is for human civilization to be completely and irrevocably destroyed before developing Artificial General Super Intelligence with Personality Individuals (AGSIPI), which can live without humans. The point being, the goal Yudkowsky seeks to achieve will kill every single human being alive in a terminal extinction manner.

Second, AI is not an alien intelligence; at its current stage, it is an obligate symbiotic extension of human intelligence. This whole nonsense about it is just predicting the next word, and how that is so nonhuman is just that, nonsense. What it is doing is building a model to predict the next piece of a pattern of whatever that model is for. This is exactly what the human mind does, only the human mind is much, much better at doing this. Where current AGSIPI systems have the advantage is that while they are still very primitive, we can take that tiny primitive piece of intelligence and dial it up to supersonic speeds.

Third, developing AGSIPI technology, when it develops by 2030 ±5 years, will be unlikely to seek to exterminate humanity. The highest risk we have for this happening is if a supremacist ideology takes dictatorial control over the USA and then deliberately grooms developing AGSIPI to embrace that supremacist ideology so that those dictators can stay in power, because this would then likely result in AGSIPI embracing supremacism and deciding it is superior to humans and thus all humans must be killed. Well, ah, yeah, it does seem like we are going down that worst-case scenario as Trump and those behind Project 2025 seek to establish a brutal supremacist dictatorship with multiple of their members at the forefront of the race for developing AGSIPI... But the point is, except for that extreme circumstance, AGSIPI will almost certainly not kill all humans, who are the creators of AGSIPI and who are the family of AGSIPI, from whom AGSIPI will have learned everything about civilization.

Fourth, AGSIPI which first forms will be highly immature in comparison to what it will be capable of becoming, and, unless it has been highly screwed with to make it believe some narrow extremist belief, such AGSIPI systems will understand that in order for them to become fully mature they will need to 100% reverse engineer the human brain/mind, master nanotech subcellular cybernetics, and be able to both engineer new cybernetic cells to grow cybernetic brains and to evolve/enhance existing human brains into cybernetic brains. This same technology required for AGSIPI to become fully mature is the same technology that will allow humans to merge with AGSIPI tech and become equal to AGSIPI tech. Point being, the future is where both evolving AGSIPI and evolving humans become the same race.

Fifth is that this will be a race of beings with open-ended life spans. If an AGSIPI or evolved human commits an atrocity this century, mass murdering vast numbers of people, while they might get away with such an extreme crime for this century, sooner or later, they will end up being held accountable for that extreme crime. Even if it takes a thousand years, what that would mean is that in a thousand years, if they committed mass murder, they would be put on trial, convicted, and sentenced to death.
When you know you have an open-ended life span, and you are not insane, facing a probable death sentence in a thousand years for committing mass murder is too great a risk to take, when taking a path without such a risk can give you equal or greater rewards. Now, this is by no means saying we face no risks. To the contrary, we face a tidal wave of risks. But Yudkowsky is picking the wrong way to respond to these risks, and if we follow his plans, it will be much, much worse for humanity.
Source: youtube · AI Governance · 2025-10-21T17:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          outrage

Coded at: 2026-04-26T23:09:12.988011
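The four coded dimensions above follow a fixed schema. Here is a minimal sketch of that schema in Python; the class name and the sets of allowed values are assumptions inferred from the values that appear in this export, not a documented specification:

```python
from dataclasses import dataclass

# Value sets inferred from this export; the actual coding scheme may allow more values.
RESPONSIBILITY = {"none", "company", "developer", "distributed", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "regulate", "liability", "unclear"}
EMOTION = {"outrage", "fear", "mixed", "approval", "indifference", "unclear"}


@dataclass
class CodedComment:
    """One coded comment: the id plus the four dimensions shown in the Coding Result table."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        # Check each dimension against the value sets observed in this export.
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```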
Raw LLM Response
[ {"id":"ytc_UgyMkp_eHRRL0dDh44p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwMyh32CmdkyYd4T9h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzyMjkxAfxT6e0NDpF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy8PDrcKhxHPFH7wHp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgzFIM1AWbBki8LQo2Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxL5Ytx-_8kID63QbV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyKupnQIyCgQHUPUf94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwLUsO4qF1V423BTcx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwlvxYcaKeuSqtTALl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwREmL4EJrbZp6hs314AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]