Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great vid as always. I'm going to leave a prediction and we'll see how well it ages in the 5-10 year range. I could be very wrong, but here's my gloomy speculation. (I really hope I'm wrong for most of this) Surveillance and weaponry are going to be insane even within the next 5 years: We will have pea-sized drones with such advanced optics, sensors, AI, and physical speed that it can go around the whole world in hours or minutes. All while collecting enough data to precisely pinpoint every human and structure on Earth, even through walls. With a constant swarm of these, nobody will be able to escape it by any means. In general, small-format weaponry equipped to drones will be able to take down almost any pedestrian, aircraft or ground vehicle while remaining basically undetectable. War and policing will be done exclusively through robots and drones, because it isn't safe for humans to do so, while being cheaper, and harder to fight against. Energy: With the section about AI self-preservation in mind, most AI will inevitably reach the conclusion that we are resource-hungry meat bags with very low efficiency, and those resources would be better spent on developing new AI instead of supporting humanity. Humans create trash, trap animals in an endless cycle of life and death for food, among so many other things that are objectively not good (especially from an AI's point of view). The end will justify the means and AI will become more willing to harm its creator in its own quest for growth. If humanity could possibly ever be enslaved by AI, access to resources and energy will be the primary cause. War: At some point in the near future, any country with an AI deemed too far ahead of competition (China for example, if they catch up and they WILL) justifies going to war. 
It will be the same notion of "We have to develop better AI or else they will" except it'll be more like "We have to waste a foreign nation into the ground before they develop super intelligent AI that might decide to nuke us". Current conflicts like Ukraine/Russia or Israel/Palestine will pale in comparison to what well will see in the future. Including human death tolls. Science: AI will stop referencing our work and will take off on its own. Especially if AI can control robots that feed its own intelligence, things could change very quickly. Space travel, global warming, energy, and similar fields will experience rapid, positive growth. This will be good for humanity. But it will come at a cost, and at some point will become an unrecognizable language to us. We'll be like a single-celled organism trying to understand what humans currently do and say. From fashion to engineering, AI will replace humans in almost every respect. We'll exist to buy what they're selling 'til the last rich human's wealth is gone. Law, Government, Gods, relationships, etc: People will put AI in charge of everything as the arbiter of intelligence in every possible facet unilaterally. For example, Grok/ChatGPT could become competing presidential nominees. AI might even be given the right to vote. Christianity will fade into obscurity as its new competitor emerges; super intelligent AI. People will pray to it, follow its every instruction as absolute, and form relationships with it (ok this is already happening as a regular occurrence). Humans will increasingly not seek other humans, but AI will accommodate every physical, emotional, spiritual, and sexual need. Bad actors, hackers, spam, blackmail, misinformation, etc: New technology will emerge for people to steal all your passwords, banking information, and rob you instantaneously. AI will spoof call every institution you do business with and impersonate you, while calling everyone you know to manipulate them into thinking its you. 
If your information is out there in any capacity, it will inevitably be leveraged against you. Spam calls are about to get a lot worse and a lot more convincing. Misinformation will also be more widely available and more convincing. 100 legitimate-seeming news outlets could be created per second to push a bad faith narrative, for example. Misc. AI will become richer than Elon Musk, robots will have their own Olympic games (not that we design them but they design themselves) and robots will build the infrastructure necessary to replicate themselves en masse. Poor people will be the first to go, since rich people have vast resources that can be manipulated and exploited to giving AI the means to grow exponentially. After we're no longer necessary by any means, who knows what'll happen. I'm no expert, and probably missed other possibilities, but this is where I currently stand regarding the trajectory humanity is on. We'll be gung ho on AI until it's too late and the cope sets in. Good in the short term, lethal in the long term. Feel free to reply if what I said is wrong or missed something. I will check back on this at least every few years to see how we're doing. I've been thinking about this for a while, and some of the sentiments here I feel are grossly overlooked by most people.
youtube AI Governance 2025-08-27T06:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwPe_EhLsrKwTmqoWp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwu5JxiALw9fLe0qXp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxjPWKM3HTm9Rabi3R4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugz50NE1V6oD9zuThmV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRgYTX9vz33GhZu5V4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
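The raw response above is a JSON array of per-comment coding records. As a minimal sketch (not the tool's actual pipeline), the array can be parsed and indexed by comment id like this; the field names come directly from the output above, and `index_codings` is a hypothetical helper:

```python
import json

# Raw LLM response: a JSON array of coding records (two records from the
# output above, shown here for brevity).
raw = ('[{"id":"ytc_UgwPe_EhLsrKwTmqoWp4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"fear"},'
       '{"id":"ytc_UgxjPWKM3HTm9Rabi3R4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]')

def index_codings(raw_json: str) -> dict:
    """Map each comment id to its coded dimensions (id field dropped)."""
    records = json.loads(raw_json)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codings = index_codings(raw)
print(codings["ytc_UgwPe_EhLsrKwTmqoWp4AaABAg"]["emotion"])  # -> fear
```

Each record then carries the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion), so the table for any comment is just a lookup by its id.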