Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
🔑 Key Takeaways (Generated by AI)

1. **Risk of Losing Control.** Hinton emphasizes that AI has advanced faster and more broadly than anticipated. He warns that we may soon be unable to manage or even predict the actions of highly capable AI systems.
2. **Existential Risks Beyond Our Understanding.** He highlights two kinds of risks: immediate misuse by bad actors and long-term existential threats, especially from AI systems surpassing human cognitive abilities.
3. **AI Versus Digital Intelligence.** Hinton underscores the unique superiority of digital intelligence: its speed, scalability, and endless capacity. He cautions that our societal and regulatory frameworks aren't equipped to handle such a disruptive leap.

---

🧑‍💼 Jobs Likely to Be Affected

While the video doesn't list specific job categories, Hinton and other analysts infer that the following professions face significant disruption:

- **Repetitive, Rule-Based Roles:** Customer service, data entry, basic accounting, and telemarketing are especially vulnerable to AI automation.
- **Creative and Analytical Professions:** Writers, journalists, coders, paralegals, and even certain marketing functions are increasingly affected by the capabilities of LLMs and advanced AI tools.
- **Transportation & Logistics:** Autonomous vehicles and AI-powered planning systems threaten roles in driving, dispatch, and warehousing.
- **Mid-Level Professional Services:** Junior legal analysts, financial analysts forecasting via AI, HR screening roles, and research assistants may see their work redefined or reduced.
youtube AI Governance 2025-07-20T00:3…
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | ai_itself                  |
| Reasoning      | consequentialist           |
| Policy         | unclear                    |
| Emotion        | fear                       |
| Coded at       | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwNElNlo3nMTOF6QoF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyO8W0zqEehIPZwdml4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyXqUoFNUp4YFBksjR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzmMJYRJWpHJWdt63F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyTyUM7JODQxesHJEd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwk3afdmWmw_bt9GId4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzSzrxP2pjGrn9Jf6h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwOXpH2R5m6pxTl7bh4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxnhuFyOZv_Iz0ekCl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwAYuK7NhpWOFn23S54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```
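A raw response like the one above can be sanity-checked before the codes are stored. The sketch below parses the JSON and drops any record whose values fall outside the codebook. The allowed value sets are inferred only from the records shown here; the actual codebook may define more categories, and `validate` is a hypothetical helper, not part of the coding tool.

```python
import json

# Allowed values per dimension, inferred from the responses above
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    are all within the codebook's allowed sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Usage: a record with in-codebook values passes validation.
raw = ('[{"id": "ytc_example", "responsibility": "ai_itself",'
       ' "reasoning": "consequentialist", "policy": "unclear",'
       ' "emotion": "fear"}]')
print(len(validate(raw)))  # one valid record
```

Filtering rather than raising keeps the pipeline running when the model occasionally invents an off-codebook label; rejected records can be re-queued for recoding instead.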