Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find all this talk about computers taking over the World surreal. I see no similarity whatsoever between the way that a computer works and the way that biological brains think. The computers that we have today are *exactly* like those from 1953 when Von Neumann wrote "The Computer And The Brain": they are machines that execute a program. In fact, computers still use Von Neumann's architecture, which is based on a binary code. I think that the problem here is an issue of semantics, of the definition of the word "intelligence". The computers of today are much, much faster than the ones from 1953, and are capable of executing much more complex tasks. So people like this man and Raymond Kurzweil equate "intelligence" with speed and complexity. But intelligence is not speed and complexity. This is why we have autistic savants who can perform incredible mental calculations, and yet are clearly less intelligent than the average human being in most survival-ability traits. *All* a computer does is follow a program. No computer has ever been able to do anything beyond what its program says. A computer that is performing a more complex task is simply running a more sophisticated program that has been given to it by a biological brain. Even computers that have been programmed to have some "randomness" ingrained into them, like computers used to help forecast future weather, are still only as random as the program allows them to be. A computer never exceeds the randomness that has been programmed into it, and it never makes the decision to break free and explore more than what it has been given. Biological brains have been programmed by Evolution for *survival*, which involves not only predicting changes to a chaotic environment but also figuring out how to *manipulate* the environment to their benefit. And furthermore, and above all, how to manipulate other thinking beings in competition for those resources.
This creates a thinking "machine" (the brain) that is very, very different from a computer. It creates "probabilistic World simulations", and the running of multiple of these PWS at the same time is what creates what we call "consciousness". At the lowest level, like that of an amoeba, it is a very vague awareness of physical reality. At the highest level, like that of a human, it creates a "theory of mind" that allows you not only to create self-awareness, but also the ability to simulate the thought processes of other self-aware beings. There is *no* evidence that computers can mimic this. Consciousness is a property of Carbon-based brains, and not possible with Silicon-based machines. That is because the process that creates consciousness requires emotions, which are tied to the biological ability to feel pleasure, pain and other purely physical sensory experiences. Furthermore, the hyper-flexible thinking of conscious biological brains requires constant pruning and building of dendrites and synapses, which is nigh-impossible to do with computers. This is not to say that true A.I. will not happen. It will happen later this century or toward the end of the next century, as humans *voluntarily* merge with computers to enhance their mental capacities, becoming essentially cyborgs.
youtube AI Governance 2025-06-16T10:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyJQKiX554Mqk_k9Rx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgztoVaOePQSQxpFUFN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwxIWF9p3zKggVtMTV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwzgk7vT2408EDMSh14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzslLXv0Un5e9InBE14AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwYrj26y4CyzIsscMd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwHuR7mWuVK2gAiWRF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugzo6ut3Mm-g7p04l5Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzqrTwXKxAjcB5s-vx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwhdIQkttAjjCF9RwB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
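The raw response is a JSON array of per-comment codes keyed by comment id, and the "Coding Result" table above corresponds to the first record in that array. A minimal Python sketch of how such a batch response might be parsed and validated into a single comment's coding; the allowed label sets here are merely inferred from the labels that appear in this batch, not an authoritative schema:

```python
import json

# One record copied from the raw LLM response above, standing in for the
# full batch array for illustration.
raw_response = '''[
  {"id": "ytc_UgyJQKiX554Mqk_k9Rx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# Assumed label sets, inferred only from the values observed in this batch.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "resignation", "outrage", "fear",
                "approval", "mixed"},
}

def extract_codes(raw: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the record for one comment,
    checking each dimension against the observed label set."""
    records = {rec["id"]: rec for rec in json.loads(raw)}
    rec = records[comment_id]
    for dim, allowed in ALLOWED.items():
        if rec[dim] not in allowed:
            raise ValueError(f"unexpected {dim} label: {rec[dim]!r}")
    return rec

codes = extract_codes(raw_response, "ytc_UgyJQKiX554Mqk_k9Rx4AaABAg")
print(codes["reasoning"])  # consequentialist
```

Validating labels at parse time catches the common failure mode where the model invents a category outside the codebook, which would otherwise silently pollute the coded dataset.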