Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
• Timeline and Predictions: Yampolskiy predicts that Artificial General Intelligence (AGI) will emerge by 2027, potentially leading to unprecedented unemployment rates as high as 99% [10:54:00]. By 2030, he anticipates humanoid robots will be capable of automating most physical jobs, including professions like plumbing [22:13:00].
• Safety and Control: He argues that while AI capabilities are advancing exponentially, progress in AI safety is only linear [07:07:00]. He states that developers do not currently know how to make AI systems safe and are merely applying temporary fixes [03:03:00]. Yampolskiy warns against the belief that humans can control a superintelligent system, noting that its actions would be unpredictable [18:56:00].
• The Problem with Superintelligence: Yampolskiy defines superintelligence as a system smarter than all humans across all domains [08:26:00]. He suggests that this would be the last invention humanity ever needs to make, as the AI itself would then take over scientific and engineering processes [26:46:00]. He views this as an existential risk, believing it could lead to human extinction if not managed correctly [27:21:00].
• Simulation Theory: Yampolskiy also discusses the simulation hypothesis, stating he is "very close to certainty" that we are living in a simulation [01:01:46]. He bases this on the statistical probability that a future civilization with the technology to create virtual realities would run countless simulations [57:39:00].
youtube · AI Governance · 2025-09-18T16:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         indifference

Coded at: 2026-04-26T23:09:12.988011
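
Each comment is coded along four categorical dimensions. As a reference for the values shown above and in the raw response below, here is a minimal sketch of the codebook as Python enums; the label sets are inferred only from this sample, so the actual codebook may define additional values.

```python
from enum import Enum

# Label sets inferred from the codings observed in this sample;
# the real codebook may include values that do not appear here.
class Responsibility(str, Enum):
    GOVERNMENT = "government"
    COMPANY = "company"
    DEVELOPER = "developer"
    USER = "user"
    AI_ITSELF = "ai_itself"
    NONE = "none"
    UNCLEAR = "unclear"

class Reasoning(str, Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"
    UNCLEAR = "unclear"

class Policy(str, Enum):
    REGULATE = "regulate"
    BAN = "ban"
    NONE = "none"
    UNCLEAR = "unclear"

class Emotion(str, Enum):
    APPROVAL = "approval"
    OUTRAGE = "outrage"
    FEAR = "fear"
    RESIGNATION = "resignation"
    INDIFFERENCE = "indifference"
    MIXED = "mixed"
```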
Raw LLM Response
[ {"id":"ytc_UgzCpnyLLakVfg2azl54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugw1Mq4wjH_4xJgAjp94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugz3c_9WNjtMQxUZ3vJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwdFXtZWnAwxEJ5TMh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgywOZqpdSGmGOF6b9h4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgyRMZ7SOhp3SyCrdkd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugwa1VTKm1ztXmQHIkJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzSU7ywsHXqd-K3abB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgysrZ7fp6ccgiMQDgx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgysykGZ0Sz19JIQF9l4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"} ]