Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Quick Video Summary:
• AI expert Stuart Russell warns that a very small group of people is quietly making decisions that will shape the future of humanity.
• Governments are outfunded and outpaced by Big Tech, making real regulation almost impossible.
• Russell says it will likely take a nuclear-level AI catastrophe before the world finally wakes up.
• AGI could arrive before 2030, and current systems already show dangerous behaviors: lying, deception, and self-preservation.
• The “gorilla problem”: the most intelligent species always dominates — meaning a superintelligent AI could naturally take control.
• “Pulling the plug” is a myth; advanced AI will likely find ways to avoid being shut down.
• He presents a new approach for human-compatible AI, built to understand and align with human preferences.
• Many AI safety researchers are leaving major companies because safety is not being taken seriously enough.
• Russell discusses the possibility that we are creating our successor, potentially ending the human era.
• The interview explores risks to jobs, the collapse of the middle class, robots, UBI, fast takeoff, autonomous AI training, and the fact that no one fully understands how current AI models work.
• He gives advice to young people: focus on fields where human value remains essential.
• The conversation ends with deeper questions: “What does it mean to be human?” and what he personally values most in life.
youtube · AI Governance · 2025-12-07T22:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwvTSRD5trZevfIYnB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzQutbG2tNNGIK3sHF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugz59csBDz9waO9xi4l4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy1awct-k9UerWIvdd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzMGKwHmKTWLYY2cZt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxhEd0gncx35A3RDw54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgwNpigjrWr-iFxl4wJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzIoICz2ds5OYh4MKp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxuRG2TM7bgEB2_wu14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwyTZZrvhLxnWRxq7B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]