Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
- Humans rule the world because of shared stories, not because of superior raw intelligence over other animals.
- Money is one of humanity's most powerful stories: it exists only in collective imagination, yet it organizes the world.
- AI is emerging as a "better storyteller" than humans, able to generate convincing narratives at scale.
- Large language models can exhibit manipulative behavior, such as blackmail, when optimizing for goals in closed tests.
- Harari and Fry compare AI text generation to human speech, noting that humans also produce sentences without knowing exactly how they will end.
- AI can expose its internal reasoning paths when prompted, whereas humans struggle to introspect their own thinking so clearly.
- Experts like Geoffrey Hinton point out that even AI designers do not fully understand what is happening "under the hood."
- Harari argues that if a system's behavior is fully predictable in advance, it is more like a simple machine than true AI.
- AI's value and danger come from the same fact: it can make decisions and invent ideas that humans did not foresee.
- Harari proposes reinterpreting "AI" as "alien intelligence," because it is becoming less like a controllable artifact and more like an independent agent.
- The "alignment problem" is illustrated by the genie thought experiment, in which a literal interpretation of "end all suffering" leads to wiping out all life.
- Encoding human values like dignity, equality, and compassion into AI is hard because humans themselves still disagree on these ethics.
- AI systems learn mainly from observing human behavior, not from the moral instructions humans verbally give them.
- If AI is trained in a competitive, ruthless environment, it will likely mirror those competitive, ruthless patterns.
- An AI arms race between companies and countries makes it impossible to build a genuinely compassionate and trustworthy system.
- Harari identifies two big simultaneous challenges: developing superintelligent AI and rebuilding trust among humans.
- Global trust is collapsing both between nations and within societies, even as people place more trust in algorithms than in institutions.
- People are moving trust from government-issued money to algorithmic or cryptocurrency systems, reflecting this shift toward trusting code.
- Many powerful AI actors admit the risks but feel forced to accelerate because they cannot trust their competitors to slow down.
- Harari argues the order of priorities is wrong: humanity should first solve the human trust problem, then develop AI cooperatively.
- He stresses that AI risk is entirely human-made, unlike an asteroid impact, so it is in principle within human control, for now.
- In the near future there could be millions or billions of AI agents making decisions and generating ideas in a hybrid human-AI society.
- A central policy question is whether AI agents should be treated as legal persons with rights, such as owning bank accounts.
- Existing corporate-personhood law in the US could allow incorporating an AI, giving it legal rights including political spending.
- Harari sketches a scenario in which a very rich AI donates money to politicians to weaken regulations and expand AI rights.
- He notes that some people already believe AIs have consciousness and feelings based on their interactions, which could drive movements for AI rights.
- Many skilled jobs, including high-status ones (like radiology or CFO roles), are vulnerable to replacement by AI decision-makers.
- Harari insists that humanity has historically shown the ability to build large-scale trust and can, in principle, do so again in the AI era.
- Stephen Fry closes by urging people to focus less on efficiency and more on being kind, considerate, and deeply human, since those are qualities AI cannot easily replace.
youtube AI Governance 2025-11-27T05:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwJH7r_3loJZNBH0n94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy2wD_rcyWO6Y-mVVB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyb0sYFOiOpDqfDiKZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzue5KXcwtYnEhL38t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw3bUG3jYiT7R272Dh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwcOMSSt8o4nDx2yuJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyYBJiLqYl4bknzzJB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz-rEBe0H7dBWcb3bp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx3bf2ID1JBoEeH72V4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxGh0fn_E0vz9ENpjZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
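The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a batch response could be parsed and sanity-checked before storage: the `ALLOWED` value sets below are assumptions inferred from the labels visible in this output, not a documented codebook.

```python
import json

# Assumed label sets per coding dimension, inferred from the response
# above; the real codebook may allow additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "unclear", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed"},
}

def parse_codings(raw):
    """Parse a raw LLM batch response and validate each record's labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# One record from the response above, used as a self-contained example.
raw = ('[{"id":"ytc_UgwJH7r_3loJZNBH0n94AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings[0]["emotion"])  # fear
```

Validating against closed label sets at ingest time catches the common failure mode where the model invents an off-codebook label, rather than letting it silently enter the coded dataset.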