Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
AI is garbage and lately I've noticed that people are useing AI generated script…
ytc_UgxFiifaY…
I don't totally disagree with what you're saying, but arguably, for a human to c…
ytc_Ugz4ZUTAy…
Bernie Sanders put a post on social media suggesting to combat the fear of AI im…
ytc_Ugzw6-z7Z…
Is it half AI or fully AI? Because I believed everything except the kangaroo…
ytc_Ugwl1xCYA…
Soooo the rapture didn’t happen yesterday like the nut job religious people trie…
ytc_Ugw7-UUvH…
People supporting AI "art" probably dream of a future where humanity is dead and…
ytc_UgwnrGISv…
Real art comes from effort but also through taste and creativity so while others…
ytc_UgxVCHyXR…
@daleksconqueranddestroy2264 The point is that they're both _tools_ used by peo…
ytr_UgyZ0IBrd…
Comment
• Timeline and Predictions: Yampolskiy predicts that Artificial General Intelligence (AGI) will emerge by 2027, potentially leading to unprecedented unemployment rates as high as 99% [10:54:00]. By 2030, he anticipates humanoid robots will be capable of automating most physical jobs, including professions like plumbing [22:13:00].
• Safety and Control: He argues that while AI capabilities are advancing exponentially, progress in AI safety is only linear [07:07:00]. He states that developers do not currently know how to make AI systems safe and are merely applying temporary fixes [03:03:00]. Yampolskiy warns against the belief that humans can control a superintelligent system, noting that its actions would be unpredictable [18:56:00].
• The Problem with Superintelligence: Yampolskiy defines superintelligence as a system smarter than all humans across all domains [08:26:00]. He suggests that this would be the last invention humanity ever needs to make, as the AI itself would then take over scientific and engineering processes [26:46:00]. He views this as an existential risk, believing it could lead to human extinction if not managed correctly [27:21:00].
• Simulation Theory: Yampolskiy also discusses the simulation hypothesis, stating he is "very close to certainty" that we are living in a simulation [01:01:46]. He bases this on the statistical probability that a future civilization with the technology to create virtual realities would run countless simulations [57:39:00].
youtube
AI Governance
2025-09-18T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzCpnyLLakVfg2azl54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw1Mq4wjH_4xJgAjp94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz3c_9WNjtMQxUZ3vJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwdFXtZWnAwxEJ5TMh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgywOZqpdSGmGOF6b9h4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyRMZ7SOhp3SyCrdkd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwa1VTKm1ztXmQHIkJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzSU7ywsHXqd-K3abB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgysrZ7fp6ccgiMQDgx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgysykGZ0Sz19JIQF9l4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
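A raw response like the one above can be checked before it is stored as a coding result. The sketch below is a minimal validator, assuming the category sets inferred from the sample rows (the actual codebook may define additional values) and the `ytc_`/`ytr_` ID prefixes seen in the sample list; none of these names come from the tool itself.

```python
import json

# Allowed values inferred from the sample rows above; the real codebook
# may include more categories (this is an assumption, not the tool's schema).
CODEBOOK = {
    "responsibility": {"government", "company", "developer", "user",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation",
                "indifference", "mixed", "unclear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows that fit the codebook."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment and reply IDs in the samples use ytc_/ytr_ prefixes.
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every coded dimension must take an allowed value.
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid
```

Running `validate_response` on the array above would pass all ten rows through unchanged; a row with a malformed ID or an out-of-codebook value would be silently dropped, which is one plausible policy (logging rejects would be another).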