Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Key takeaways:
- AI Risks: Hinton differentiates between short-term risks from AI misuse by humans (cyberattacks, new viruses, election corruption, echo chambers [12:11]) and the long-term existential threat of super-intelligent AI making humanity irrelevant [07:22].
- Unstoppable Development: He believes AI development is inevitable due to its benefits in various sectors, including healthcare and military applications, and that current regulations are insufficient [10:08].
- Lethal Autonomous Weapons: Hinton highlights the military's interest in AI-powered weapons that make independent kill decisions, which could potentially increase the frequency of wars [25:53].
- Job Displacement and Inequality: AI is poised to replace many "mundane intellectual labor" jobs, leading to significant job displacement and exacerbating wealth inequality [40:00].
- Nature of AI Intelligence: He suggests that digital intelligence can be superior due to its ability to create clones and share information rapidly [56:29]. Hinton also believes that machines can be conscious and experience emotions [01:03:53].
- Hope for Safety: Despite the warnings, Hinton expresses hope that with significant resources dedicated to safety research, AI can still be developed safely [01:24:17].
Source: youtube · Topic: AI Governance · 2025-06-17T10:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
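
The values in this table come from a fixed codebook. As a reference, here is a minimal Python sketch of that schema; it assumes the label sets are exactly those observed in the raw responses on this page, and the real codebook may define more values:

```python
from dataclasses import dataclass

# Label sets observed in this sample; the full codebook may define more values.
RESPONSIBILITY = {"none", "government", "developer", "distributed", "ai_itself"}
REASONING = {"unclear", "mixed", "deontological", "consequentialist", "virtue"}
POLICY = {"none", "ban", "regulate", "liability"}
EMOTION = {"indifference", "resignation", "approval", "fear", "outrage"}

@dataclass(frozen=True)
class CommentCode:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self):
        # Fail loudly on labels outside the observed sets instead of storing them.
        for value, allowed in (
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected label {value!r} for comment {self.id}")
```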
Raw LLM Response
[ {"id":"ytc_UgwYjZ6x9huB-rhc5kB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgznBYQPjMZnwho7-YR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugz73Uw60G17e3IVmE14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzIQJDVSafc2Ggs9PN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzc-XzTkYysdOT4w9V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzzG3e3o534v9sjO7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw4eRshe3Rul-HYavd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyfetc2Z0RWB0SVr1V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzyRkmBI4IjxQaDl8J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxag7w5tMTlrTZVcPt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]