Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I hesitate to say I was disappointed by how AGI was explained in this video; "unsatisfied" is closer to the truth. My work sits between technology and sociology, and I spend a lot of time on thought experiments because we do not yet live with AGI. That makes careful, clear thinking essential.

First: will AGI arrive, and how will it happen? Many colleagues treat AGI as inevitable, likening it to the moon landing or the Manhattan Project, and there is a plausible scenario in which geopolitical competition accelerates development. A global "winner-takes-most" dynamic could push nations and firms to prioritise capability and speed over caution. That makes the race metaphor useful: whoever secures sufficiently powerful systems first will have large competitive advantages in science, defence, and industry.

Second: what is the AI we already have, and how does it differ from AGI? Modern systems like large language models (LLMs) are powerful pattern-matching machines. Try flipping a coin: the chance of heads or tails is 50/50, slightly less if you allow for the coin standing on its edge. Now flip it 100 times. In front of us are two boxes, one labelled heads and the other tails. Each time the coin lands on heads, we put a token into the heads box, and likewise for tails. At the end we weigh each box, and for the 101st flip we guess whether the coin is more likely to land on heads or tails, depending on how many tokens each box holds. It is a guessing game. The only difference is that we give the machine all the information and knowledge we have on the internet, from Wikipedia, YouTube, academic research, lab reports from high-tech companies, and so on, and the machine learns by deconstructing it into machine-readable form, hence the name machine learning. Fun fact: if there is a mistake on the internet and many people have cited it as fact, machine learning will absorb it and produce false information.
Later, false information produced by AI will in turn be used to train AI, producing more falsehoods on the internet, so make sure you keep a paper book, and keep it off the grid and off the cloud. This is a known vulnerability that researchers and regulators are actively studying. Chatbots today can carry on conversations, generate images and audio, and convincingly mimic styles or identities. But that is not the same thing as AGI. Think of today's chatbots as highly sophisticated tools, an electronic abacus or an advanced calculator, while AGI would be the most advanced supercomputer: by definition, capable of broad, flexible, autonomous problem-solving across domains, including reasoning, planning, and learning new skills without task-specific retraining. One proposed hallmark of true AGI would be robust generalisation, and some commentators also point to sentience or self-awareness (the ability, like a human toddler, to look into a mirror and think "This is I", a commonly underrated phenomenon) as qualitative markers, though whether those are necessary, or even possible, is a philosophical and scientific open question. Beyond that point it is no longer a machine but a new species: an artificial being designed to surpass humans in every way and to serve humans as needed. A new AGI with a robotic body fresh from the factory would be no different from a newborn calf, lamb, or human infant.

What does this mean for humans? One thing I do agree with Dr. Neil deGrasse Tyson on is that we will create new jobs and find new things to do, just as in the film Charlie and the Chocolate Factory, where Mr. Bucket loses his job at the beginning but returns to work in a different role at the end. Yes, we may experience some chaotic times, but we should be better prepared when the time comes. In the long term, I will assume humans will have effectively unlimited resources once AGI has helped us advance energy and space-travel technology.
We may live for hundreds of years or even become immortal with advanced medical technology. AGI may free us from labour, including intellectual labour, once and for all. Those are possibilities, not certainties. Equally plausible are scenarios of concentrated power, surveillance, or runaway systems that are not aligned with human values. Balancing the upside with the risks is the central governance challenge.

Practical, human questions follow. We already debate the rights and status of artificial systems: should an advanced machine have legal rights? Work hours? Compensation? If AGI becomes sufficiently autonomous, these are not merely sci-fi questions; they become legal and moral issues that societies will have to answer.

A concrete note on incentives: Sam Altman, among others in the AI world, helped start a project (Tools for Humanity / Worldcoin) that links biometric verification to a digital token. Its designers have at times framed token distribution as inspired by Universal Basic Income (UBI) discussions. But Worldcoin is a controversial, experimental project that has raised privacy and regulatory concerns in several countries. A similar concept could serve as a future UBI system in which you are paid merely for being human (a privilege?), and it may become part of human rights in the near future.

Are there any warnings I would offer? There is one thing: when did you last use the word "please" when entering a prompt into a chatbot? When was the last time you smashed a keyboard, or saw someone kicking or punching a machine? If one day we do share our world with AGI, we will need to change our perspective on our relationship with them. Experiments like artificial wombs and new reproductive technology raise complementary ethical questions, and a human-robot child may not remain limited to sci-fi. If humans and artificial beings coexist, we will need new frameworks of rights (can AGI vote?), responsibilities, and mutual respect.
I’m open to debate on every point here. John Stuart Mill’s ideal of listening to all voices is excellent guidance for a topic where the consequences are so large and uncertain.
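The coin-flip analogy in the comment above describes simple frequency estimation: count past outcomes, then predict the next one from the counts. A minimal sketch of that idea, with simulated flips standing in for the real coin (the variable names and the seed are illustrative assumptions, not part of the comment):

```python
import random

# Simulate the 100 coin flips described in the analogy.
# The seed is arbitrary; it just makes the sketch reproducible.
random.seed(42)
flips = [random.choice(["heads", "tails"]) for _ in range(100)]

# The two labelled "boxes" are just counters of tokens.
boxes = {"heads": 0, "tails": 0}
for outcome in flips:
    boxes[outcome] += 1

# For the 101st flip, guess whichever box holds more tokens,
# and estimate the probability of heads from the counts.
prediction = max(boxes, key=boxes.get)
estimated_p_heads = boxes["heads"] / len(flips)

print(boxes, prediction, estimated_p_heads)
```

The point of the analogy survives in the code: the "learning" is nothing more than tallying observed frequencies, which is why errors in the observed data propagate directly into the predictions.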
youtube AI Moral Status 2025-11-21T18:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw3rw6_lLhOwXK4B9p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxuAskHLSP92pCFrS54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyyeKd58GGPmClx3bR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzFj_JfsDPLl4pi2Wx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxOnxJYAgZ835BW1w14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy-NDgtC4ptswW6FDF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwZqMtdvVoClufwnsh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyymsPfdyYj6LuxL6x4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugx_VDbNZWpD1h8Ypf94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgySs4RPmQrUs-nyGHt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
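The raw LLM response above is a JSON array of coding records, one per comment, each with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys. A minimal sketch of how such a response could be parsed and tallied (the two embedded records are copied from the response above; everything else is an illustrative assumption, not part of the coding pipeline):

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above,
# standing in for the full array.
raw = (
    '[{"id":"ytc_Ugw3rw6_lLhOwXK4B9p4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"resignation"},'
    '{"id":"ytc_UgzFj_JfsDPLl4pi2Wx4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

codes = json.loads(raw)

# Index the coding records by comment id for quick lookup.
by_id = {c["id"]: c for c in codes}

# Tally one dimension (emotion) across all coded comments.
emotions = Counter(c["emotion"] for c in codes)

print(by_id["ytc_Ugw3rw6_lLhOwXK4B9p4AaABAg"]["emotion"])
print(emotions)
```

Looking up the first id returns the same values shown in the coding-result table for this comment.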