Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
the thing that bothers me about the "ai will destroy humanity" argument is this: setting aside nuclear weapons - ok, say we can prevent it from wiping us all out in one fell swoop like that - I am not worried at all about an AI takeover, or an AI army ruling or enslaving humanity, and here is why. AI is dependent on VAST amounts of power; advanced data centers; very precise production and assembly routines; very exact temperature and humidity conditions; and maintenance of all these systems by humans. until the day we have fully automated robots that could be controlled by the AI - robots that can do roof repair, build power plants, maintain server racks, troubleshoot and repair electric grids (outdoors and indoors), mine minerals and ship them across the globe, and FULLY automate and control chip and hardware production for both the AI and its robots - we are in little danger. again, a rogue AI gaining control and launching a bunch of nukes is a problem ... but if an AI got out of control and really started taking over the world, and a real, gritty ground-combat situation ensued, it would not even be close. until there are robots that can basically self-sustain repair and production, it would be far too easy to disable all of the things that AI depends on just to EXIST, let alone to fight a sustained war against humanity. it just wouldn't happen. the day there are both proficient combat robots AND proficient general-use robots (roof repair, plumbing, chip production, hardware production, raw material processing...), then I will be worried. If there is one thing humans are good at, it is destroying shit. And if AI started getting frisky, there is absolutely NO way it wouldn't be lights-out within 24 hours if the collective will of humanity was for it.
this does not mean that I think developing AI is risk-free, or that it wouldn't make poor choices in scenarios like the ones you outline, or that it may not lead to some huge mistakes we will end up wishing we could reverse, but just to address the title of the video - I do not think (outside of extremely specific scenarios like nuclear war) that AI could pose a threat to the existence of humanity. 26:30 man, i hate to say it, but for once i think you are wrong. AI ABSOLUTELY has needs, and it absolutely needs us. these systems are exceedingly fragile, and their needs are complex - probably more complex than anything we have ever created. if there weren't humans tending to every single need of those servers, AI would cease to exist very quickly. Elon is building whole power plants just for his newest data centers; think about how easy it would be, in an adversarial situation, for humans to knock that out. It's not like we are creating a phantasm. again, if there is automation one day that takes care of ALL the support needs AI has, then we have a real problem, but that is a LONG way off - a lot longer off than "super intelligence", I will grant that ... but i would go so far as to say that AI's needs are even greater and more complex than ours: those insane amounts of power, the finely produced hardware... it has multiple physical requirements (maintenance, temperature, power, hardware, software, properly formatted information), and for the foreseeable future it is dependent on us for all of those. we, on the other hand, are quite resilient. all we REALLY need is food and water, and we can eat just about anything.
again, there are things that could help resolve the barriers to AI's autonomy, like advancements in power production (like fission), or advancements in the prevalence of the types of technology that could "run" this "super intelligence" (say it pared itself down to a program that could run on any smartphone), or the advent of general-use robots to carry out construction and maintenance of its support ... but these are all going to be incremental gains, and although super-intelligence could help them come about faster, thinking that superintelligence is just something that will "exist" in perpetuity once we create it is REALLY oversimplifying exactly what these things "are". again, not saying it's something harmless, or that there won't be problems. but humanity is going to have more than enough kill-switch options that, if AI ends up becoming a problem, it will only be a problem while an army of people is working for it - tending its physical requirements for existence, and carrying out its commands. i mean, as impressive as grok is right now - give my dumb ass a bucket of water and put me in the right place, and im takin that fucker down lol.
youtube AI Governance 2025-08-27T06:5… ♥ 4
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxPOdDfPKgyfEKL-oF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxCkxXD_DWRNkrkpuN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy1t3lXh0kjmnXpf1t4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxCKo7uPz-Ic7NqIFB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxCxEDMk2OUPyp2OE94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
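The raw response is a JSON array with one object per coded comment, carrying the four coding dimensions from the result table plus the comment id. A minimal sketch of how such a batch response could be parsed and indexed by comment id with a basic schema check - `index_codings` and `EXPECTED_KEYS` are illustrative names, not part of the actual coding pipeline:

```python
import json

# Batch coding response copied verbatim from the "Raw LLM Response" log above.
RAW = """[
  {"id": "ytc_UgxPOdDfPKgyfEKL-oF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxCkxXD_DWRNkrkpuN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy1t3lXh0kjmnXpf1t4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxCKo7uPz-Ic7NqIFB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxCxEDMk2OUPyp2OE94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# The four coding dimensions shown in the result table, plus the comment id.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a batch response and index each coding by comment id,
    rejecting any row whose keys do not match the schema exactly."""
    out = {}
    for row in json.loads(raw):
        if set(row) != EXPECTED_KEYS:
            raise ValueError(f"unexpected keys in row: {sorted(row)}")
        out[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return out

codings = index_codings(RAW)
print(len(codings))                                          # 5
print(codings["ytc_UgxCxEDMk2OUPyp2OE94AaABAg"]["emotion"])  # indifference
```

Indexing by id is what lets a single comment's row (like the one tabulated above) be pulled out of a multi-comment batch; the strict key check would catch a model that drops or invents a dimension.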