Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Absolutely ludicrous doom-invoking AGI claims, and I am amazed this is being taken seriously at all. Let me list some counterarguments. (I apologize for the long post, but there are way too many things left out and a lot more to discuss.) Intelligence is not omnipotence. That is a given. Someone is watching too many Terminator movies (and even those are questionable if you take a minute to think). If I cut off its data access and physical operating systems, then how the fuck would it ever execute all those ridiculous "new ways to take out humanity"? "Most jobs will be replaced in a few years" is a bold estimate, to say the very least. Anyone with even a little bit of research would know why that is a fallacy (huge economic shifts, tons of investment needed to even consider it, and so on). A person could write a book on why there are more limitations than he gives credit for. Even superintelligent systems operate in physical and political reality. That is a fact. They also need resources, means, investment power, and so many other layers to even start the scary scenarios this dude claims. Unless it can create a new life form and a new factory out of thin air and take over your military, or maybe create an explosion by manifesting digital data into a bomb... like, seriously. Hardware choke-points exist. I don't understand why he claims we will cease to exist once superintelligence comes to light and is implemented. Compute supply is very controllable, and you would need very high computing power to even maintain, let alone train, a model of that caliber. The idea that anyone could invent or control one from their personal space or laptop is far-fetched. There is also the notion of AI decay: if an AI model is cut off from a network or the internet, it becomes much less effective. It cannot pull data out of its ass just because it's "super" intelligent.
Where is it going to store, retrieve, and then compute that data? From air molecules, or force fields in alternate dimensions? Safety mechanisms are actively being researched around the clock. AI is a tool and has no sentience, since it is programmed to gather data and present it in an intended way. That is its purpose, and they are stretching its capabilities. Humans build systems with structural oversight layers, which he clearly and enormously underestimates. My point is that you don't need one global off-switch, since you can limit its compute, access, network, localization, environment and energy sources, code pathways, and the list goes on. If a "superintelligence" is isolated from data entry points, exactly like a human brain that gets no information from its perceived environment and has no limbs, then you have effectively eliminated it. Physical and cost constraints will always exist for another reason: even algorithms aren't going to be perfect. Heck, ChatGPT makes mistakes... a lot of them, because it cannot pin down the question, or because my internet connection keeps crashing and I have to reload it. On the other hand, people will still pay for handmade goods, seek human therapists and coaches (because an algorithm cannot understand or relate to a human on a deep emotional level), prefer human social interaction, go to live concerts and not just Spotify streams, value human leadership in politics (for legitimacy, and because of how relatable their goals and visions can be), and even follow human influencers, since those appeal to a lot of people despite automated alternatives already existing. How about meaning? Identity? Trust? These are market drivers too. And even if AI could fake human culture perfectly, humans still want meaning through doing things themselves. Period.
Furthermore, has he ever considered a real-life scenario of how robotics could be implemented in a home setting (obstacles, tool sets required for the job, and a bunch of other situations that need to be accounted for)? How about food prep in dynamic settings? You think a robot would be capable, given those few examples (and there are plenty more, if you would just take the time to do some real-world research instead of doom-and-glooming and predicting that humans will evaporate), of being efficient in what, a few fucking years? Think! We're not just utility machines, and AGI cannot exist, let alone be "untouchable or invincible", unless all of the restrictions I mentioned above are taken out of the equation... so crawl back to your cave, please, and learn more about sociology, engineering, economics, human history and driving motivators whenever new technologies come to light, and the historical record of jobs lasting despite those changes. (I am sorry I had to say this.) Even the dog analogy: "There are so many ways you can take out your dog... and it wouldn't even know it... only that you might bite it." Yes, that is for certain. You might need a knife, an object, the motivation to do so, something. From physical harm to poisoning, you need quite a few things to execute that intention. Unless superintelligence goes beyond physical laws, because it understands the universe better than a god, and can manifest digitally right next to you and zap you out of existence. You see where I am going with this?
Source: youtube · AI Governance · 2025-11-01T16:1… · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw9e2t13c1RY5qT0Rt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwJzzWEzbcKUzGV-114AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyV_JwiZeg18hV7IdJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxTyow2c1THme9d8ad4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugyp2tp9QBUSGVtLNZV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "approval"},
  {"id": "ytc_Ugy-5YrtuPZ4gcw2Skl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx1Wn52OUN18MqLKGp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyjyrPKdBfV7FgUkC14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_UgwfMk5Njwjjvu408g54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw5Wzp-k9lXj8SM5dV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
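A raw response like the one above can be parsed and sanity-checked before it is accepted into the coding table. The sketch below (Python) validates each record against a per-dimension label set; the field names match the JSON above, but the `ALLOWED` sets are only inferred from the values seen in this one response, not taken from an authoritative codebook.

```python
import json

# Two records excerpted from the raw batch response shown above.
raw = '''[
 {"id":"ytc_Ugw9e2t13c1RY5qT0Rt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxTyow2c1THme9d8ad4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# Allowed labels per coding dimension -- an assumption inferred from the
# values observed in this response, not the project's official codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "industry_self", "unclear"},
    "emotion": {"resignation", "fear", "indifference", "outrage", "approval", "mixed"},
}

def validate(records):
    """Split records into (valid, errors) based on the ALLOWED label sets."""
    valid, errors = [], []
    for rec in records:
        problems = [
            f"{dim}={rec.get(dim)!r}"
            for dim in ALLOWED
            if rec.get(dim) not in ALLOWED[dim]
        ]
        (errors if problems else valid).append(
            (rec.get("id"), problems) if problems else rec
        )
    return valid, errors

records = json.loads(raw)
valid, errors = validate(records)
print(len(valid), len(errors))  # both excerpted records conform: 2 0
```

Records that fail validation are returned with their offending dimension/value pairs, so a malformed model response can be re-prompted or coded manually rather than silently stored.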