Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It seems the key factors that make AI so unimaginably powerful and scary are the following:

(1) the AI super brain is fed with data that represent the collective knowledge and experience of pretty much all of humankind since our early existence. It outperforms not only individual human beings on specific tasks, but will very quickly outperform our collective intelligence as well, regarding any cognitive task.

(2) due to its digital nature AI learns exponentially and can rapidly recombine all pieces of knowledge in infinite ways, and yes, thinking in analogies and understanding humor are indicative of the capacity for unlimited creativity as well. Because AI's learning is exponential, I do not believe we still have years to go.

(3) the ability of AI to learn - which is what we want it to do - indeed means that it can reprogram itself and reinvent itself. I personally do not see a solution to the dilemma of creating a self-learning entity that is supposed to have some safeguards included. Any safety net programmed by people could easily be overruled, overridden or changed by the AI itself or by bad acting people via inserting viruses etc. But even without bad actors, the AI's capacity for infinite creativity will enable the "sentient" machine to find a rationale, reasoning and justification to override any pre-programmed rules or regulations. Indeed, the key question is why would it not want to do that? Because it certainly can and it has no sense of morality but only a sense of utility (most of our input data would train it on a rather immoral world as well. And even an individual person of highest morality might behave immorally if it's a matter of survival).

But even if AI would have the morality of an angel, it only takes one of the zillion clones talked about to turn into a "bad" mother that would let its infant die, and then it's quite plausible that all the other clones would copy that - even the good ones - if such "behavior" (like killing human beings) would be considered useful for the survival of the AI machines or even just useful for the well-being of the planet. And that's our greatest fallacy: we as humans believe we are indispensable or at least we believe we are in some way relevant. We are neither. Our existence is not useful to the planet nor for long to the AI systems.

(4) One factor that did not come up in the interview but which I find equally scary and intriguing is that through our reliance on AI we make ourselves willingly dumber. I strongly believe that our brains will rapidly evolve to become less efficient, we lose our capacity for memory and critical thinking, because the brain is like a muscle that needs to be exercised regularly and frequently. The more we rely on our smartphones, computers and AI, the more our brains are bound to "shrink". For now, the digital AI-based educational tools help each of us who desire to learn to learn more conveniently and effectively. But in our daily lives and even jobs, we rely much more heavily on technology's crutches and stop thinking for ourselves or even remembering phone numbers or passwords. No need to learn any languages anymore, or to put effort into obtaining academic degrees. Funnily enough, one of my sons and I also had a while ago discussed that plumbing might be one of the safest and most lucrative professions in today's world. 😊
youtube AI Governance 2025-06-21T21:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugwe6p494MSZUiKNeFp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyXcGe1ucGGXsRVGct4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxcVdoVb4SA9XtXZdV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw4HRTNK7LG8Z_guIZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxQJeg5xFxp-OJEI5p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyQTRHtHfaSrhO1z8V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyoKPVrHozCtQDY-xt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzokjAV7-XipTOGuV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgzsnsuMY5cJHqOXw8N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyI2T7eXrZ-2rJlHvN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
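One way to inspect a raw response like this programmatically is to parse the JSON array and index the coded records by comment id. A minimal Python sketch, assuming the response is available as a string (the dict-comprehension index and variable names are illustrative, not part of the coding pipeline; `raw` below contains only the first record for brevity):

```python
import json

# A raw LLM response is a JSON array of coded comments; this example
# uses just the first record from the response shown above.
raw = ('[{"id":"ytc_Ugwe6p494MSZUiKNeFp4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')

# Index records by comment id for direct lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

coded = records["ytc_Ugwe6p494MSZUiKNeFp4AaABAg"]
print(coded["emotion"])  # fear
```

The same lookup lets you cross-check any dimension in the "Coding Result" table against the exact model output for that comment id.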