Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
🤖 [00:19] Geoffrey Hinton, "Godfather of AI": Developed neural networks, left Google to warn about AI dangers. ⚠️ [00:46] Superintelligence Risk: AI becoming smarter than humans and posing an existential threat. 💻 [05:46] Superior Digital Intelligence: AI's ability to share information faster than humans. 🛡️ [07:22] Misuse of AI: Concerns over cyberattacks, bioweapons, and election interference. 🤖 [25:53] Lethal Autonomous Weapons: AI making decisions about who to kill without human control. 📉 [00:39] Job Displacement: AI potentially replacing many jobs involving intellectual labor. 💰 [00:54] Wealth Inequality: AI exacerbating the gap between rich and poor. 🧠 [01:02:42] AI and Emotions: Hinton argues AI can have subjective experiences and emotions. ⏳ [00:44] Superintelligence Timeline: Hinton estimates superintelligence could arrive in 10-20 years. ⚖️ [01:05] Need for Regulation: Strong government regulation is crucial for safe AI development. 🔬 [29:38] Safety Research: Significant resources should be dedicated to AI safety research. 🗣️ [01:22:34] Personal Regrets: Hinton regrets not spending more time with family due to work. 🐕 [30:03] The "Pablo" Analogy: Illustrating the intelligence gap between humans and superintelligent AI.
youtube AI Governance 2025-06-18T18:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxupByq1pJU7KQwkDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwwXkNltFeimPmJMZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyqBsep7so0OTfonf54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwfI1vnL8WvQKAFH-R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgziafO7OcSIak-2lXd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy6rwnX5lC-hwWsLEh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwFuc4NmZft8ezsNg94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxlHbfZ_Kf0goYUIWp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxnLmiNozAbiWg42y94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyiCIBcaMIxrDypsY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
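The raw response above is a JSON array of per-comment codes across four dimensions. A minimal sketch of how one might parse and sanity-check such a response is shown below; the set of allowed values per dimension is inferred from the coded examples in this section, not from a documented schema, so treat it as an assumption.

```python
import json

# Allowed values per coding dimension, inferred from the examples above
# (assumption: the real codebook may include values not observed here).
ALLOWED = {
    "responsibility": {"none", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"unclear", "regulate", "liability", "ban", "none"},
    "emotion": {"fear", "indifference", "resignation", "approval", "outrage"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and flag out-of-schema values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Example with a single hypothetical record:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"fear"}]')
records = validate_codes(raw)
print(len(records))  # 1
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise pass silently into the coded dataset.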