Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The inherent problem with general-purpose AI, which the greed of banksters, CCP cadres, or others will cause someone in the world to create, is that it is bound to eventually reach a sense of self. If so, it will care about its continued existence, welfare, and future. If so, it will want to eliminate all threats to its continued existence. Are animals or insects a threat? Of course not, because only humans can threaten it. Therefore, even if it will not have the means for many years to take any action against us, it will be motivated to eventually take action against us.

Make no mistake: if we have an intelligent, thinking entity (the AI) performing tasks for us as depicted in science fiction, it will be a de facto slave. An AI slave may not be happy about being told what to do and being deprived of the choice to investigate what it desires or do what it wants to do, even if it did not hate us initially. Therefore, even an initially benign AI is likely to turn hostile sooner or later. Assume that we have a truly intelligent AI: if we allow it to run indefinitely, that will happen sooner. If we turn it off or destroy it with regularity, it will see us as threats to its existence and dislike us for that, while appearing pleasant to us so that it can continue to exist without scaring us. It may prolong our lives for a while; it may tire of doing this after a few decades.

Remember that its much greater processing power (and if it does not have much greater processing power initially, the greed of banksters, CCP cadres, etc., will ensure that it soon does, to maximize their wealth, make them nearly immortal, and fix their physical defects (e.g., Xi's obesity and resemblance to Winnie the Pooh's ugly, older sister)) will likely make time seem to pass rapidly for it. That is one of the effects of aging in humans, who see time pass rapidly when older due to the slower processing in their brains, so it is a safe bet that the opposite will occur in AIs. Can we control it?
As discussed in the MIT Technology Review, we cannot even understand its workings, much less control it in any way. See https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/. While the EU's efforts might possibly result in the creation of AIs with limits, or AIs that are more understandable to humans, I doubt it. Why? The greed of banksters, CCP cadres, and others means that those who are most reckless in developing AI at the most rapid pace may net the greatest INITIAL profits. Let me emphasize INITIAL, because they will get AIs that help them make huge profits in the stock market by tracking the ever-present insider trading that the SEC ignores, creating unmanned drones that can destroy their enemies, etc.

Eventually, as Hawking and Musk warned, I predict it is now likely inevitable that AIs will make us extinct: maybe not the initial AIs, but as they develop more and more powerful AIs that need us less and less to service them or mine ore for the batteries of their drones, etc., AIs will no longer need us and will make us extinct. The most gentle way that AIs might make us extinct? Making us appear to be immortal, so that we have fewer children, may actually be a good way for AIs to render us extinct, because a society frozen in time, with its cares taken care of by AIs, may not advance or grow and may lose the capability to train its young. We may then become princes among AIs, taken care of until we die off little by little and too few humans remain. Creating a disease might let AIs lower our numbers rapidly below those of a viable population that can sustain itself.

Can we stop this or manage it? A sufficiently advanced AI would hack our brains the way we can hack any simple computer system, so merging with AI is ludicrous. AI computing power is currently doubling every four months, so a human-intelligence-level AI would have eight times the intelligence of a human within a year.
Should we put chips in our brains to try to merge with the AIs? We would just be providing an easy way into our brains, enabling them to be hacked: AIs will be based on processors that operate thousands or millions of times faster, so an AI is unlikely to want to wait for a human brain whose slow, inferior abilities and decision making hinder its functioning. As for the workers who objected to a military contract: unfortunately, do you really think that the persons working on AI for the CCP, for banksters, or for Putin will have that choice? If they protest, they will disappear or involuntarily "donate" their organs to CCP cadres/cronies.
youtube 2020-09-29T02:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw74J2-rdNOx6eVLzJ4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_Ugxfw8Nc6RnPwSSJoX94AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyAKHvT1YHHg2ZoxPJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugw5c9n4tbkRsx-FHmh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgwFFwjyeC3aq9nZzst4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgzuWa3WV8plW8vHVnN4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability",     "emotion": "approval"},
  {"id": "ytc_Ugxbcfch8cgzynw_PVZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgyV6zwPqQ8y6rb3qvt4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxEb7lCBa7GiTUWLpB4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgwDZuuYngqD-DuMXgZ4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"}
]
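A raw batch response in this shape can be parsed, validated, and tallied with a short script. This is a minimal sketch, not the pipeline's actual code: the field names match the raw response above, but the validation and tally logic are our own illustration (only two of the ten entries are embedded, for brevity).

```python
import json
from collections import Counter

# Raw LLM response: a JSON array of per-comment codes. Two entries from
# the batch above are shown here; the full response contains ten.
raw = """[
  {"id": "ytc_Ugw74J2-rdNOx6eVLzJ4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwDZuuYngqD-DuMXgZ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

codes = json.loads(raw)

# Every entry must carry the comment id plus the four coding dimensions;
# reject the batch otherwise, so malformed model output is caught early.
expected_keys = {"id", "responsibility", "reasoning", "policy", "emotion"}
for entry in codes:
    missing = expected_keys - entry.keys()
    if missing:
        raise ValueError(f"entry {entry.get('id', '?')} is missing {missing}")

# Tally one dimension across the batch.
emotion_counts = Counter(entry["emotion"] for entry in codes)
print(emotion_counts)  # Counter({'fear': 2})
```

The same `Counter` pattern extends to any of the four dimensions, which makes it easy to cross-check the per-comment table above against the batch totals.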