Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Every technology past a certain level of complexity has unintended flaws. Large Language Models, because they are TRAINED rather than designed, are inherently full of flaws. LLMs are essentially association machines; in that sense, there IS a similarity with our own brains. Since several AI efforts are running in parallel, it is pretty much guaranteed that at least one will be WORSE in terms of flaws and, consequently, unintended consequences.

Let us be clear on one thing: there is no actual comprehension of anything in AI. Engineers were given a challenge to 'pass the Turing test', in other words, to fake people out so they could not tell a machine was talking to them. Give engineers enough time and resources and they will solve almost any problem, but that goal was to fool people. Then they gave that 'tool' access to a LOT of data, including a lot of books. Then the engineers, whose initial goal was more like handling service calls on the phone, were building software to solve other problems on top of the 'platform' of AI.

But now we come to the part with the money. The people who gave these engineers the money and the time wanted a return. So they pushed the technology onto the internet, put it into jobs requiring judgement, let it off the leash, and told businesses that they could trust the AI. Remember the part about the flaws? The 'software' is so large, complex, and 'fuzzy' that few of the experts understand what it is and how it can go wrong, so neither the management who 'own' it nor the management who 'hire' it can be trusted to exercise good judgement in how to use it. The engineers do their best, but they know it is not a predictable machine; in fact, that unpredictability is a selling point the owners use.

Again, AI does not UNDERSTAND anything. It is not a person, no matter how well it is trained to sound like one. It does not understand the stories or documents it tries to summarize. It has never experienced one moment of life. It does not understand life and death, cannot have faith, even in science, and has no experience with pain or emotions, insecurity or devotion. It has no comprehension; it is just built to pretend it does. Yes, there are LOTS of clever software engineers behind it, but they are simply trying to ride the tiger: they have their jobs and do their best with an inherently hyper-complex beast.

For instance, to make software with 'very high security', it needs to be absolutely predictable in every possible situation; any unpredictable behavior can be 'exploited' by a hacker. We have people in this world who are unethical, who will have access to AI that is not ethical, and who can therefore hack other AI. This is so incredibly obvious it is essentially a FACT that it will happen.

We have businesses that are HUNGRY to replace workers with AI. They will not want to wait for a 'finished product'. Not only will that AI be told to make choices, they will quickly let it off the leash and not properly supervise its choices. Consider managers you have known, and tell me that none of them would ever do that. Again: no understanding, no comprehension, no actual ethics or morals, because the training cannot cover every situation and the AI cannot generalize due to lack of understanding.

Then consider the consequences to the economy when nearly every business finds ways to replace at least many workers with AI. What jobs do those people get now? How can they be customers? What happens to businesses without customers? This will require massive changes to our economic system, which is not necessarily bad, but be ready for it and hope it happens before the riots...

Sure, AI will prove itself helpful in science and enhance our abilities, but the warning bells are deafening: our Department of 'War' is insisting on no restraints in handing AI the ability to kill, and our current administration is trying to stop ANY laws regarding AI. We have far less in common with AI than we would with space aliens who do not share even one bit of DNA with humans. I am not worried about some mythical 'Super intelligence'. I am worried about idiot humans treating an idiot AI as a 'Super intelligence' and not preparing for the consequences. I have read and watched too much SciFi, including Asimov's. I have worked in electronics and architected complex systems too long, paid attention to economics, business, and politics too long, and studied the human brain too long, to buy any of this nonsense.
youtube · AI Governance · 2026-04-07T01:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
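
Each coded comment carries the same four dimensions. As a minimal sketch of that record shape in Python, the TypedDict below uses only the label values visible in this batch; the tool's full codebook may define additional labels, so the Literal sets are an assumption:

    from typing import Literal, TypedDict

    # Value sets observed in this batch only; the full codebook
    # may define more labels (assumption).
    Responsibility = Literal["developer", "company", "investor", "moderator",
                             "government", "system", "ai_itself", "none"]
    Reasoning = Literal["consequentialist", "deontological", "contractualist", "unclear"]
    Policy = Literal["none", "liability", "regulate"]
    Emotion = Literal["fear", "mixed", "resignation", "outrage", "approval"]

    class CodedComment(TypedDict):
        """One per-comment record, as emitted in the raw batch response."""
        id: str              # YouTube comment id, e.g. "ytc_UgynScBmISJsQaXR4014AaABAg"
        responsibility: Responsibility
        reasoning: Reasoning
        policy: Policy
        emotion: Emotion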
Raw LLM Response
[ {"id":"ytc_UgyITnnQi63Jc1r4u914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxFtoJszr3v3g44IpR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwuKy6QufX8TCo7_Ml4AaABAg","responsibility":"investor","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxfeTaqrBLqGNaIWnV4AaABAg","responsibility":"moderator","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwZPomTG_RHLiPkXlx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugx8SETfOMr-czgQlId4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzijvHPfgviqRnbT_F4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwmV79b2N6I3xGx0np4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwi_oUaa1CyKC2SdIV4AaABAg","responsibility":"system","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgynScBmISJsQaXR4014AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]