Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- why is youtube doing this? to save money. using ai means meaning you dont have t… — `ytc_Ugy0oKb3g…`
- So, I have a plan. You can print a batch of small “Generated by A.I.” stickers a… — `ytc_Ugxo-v0sL…`
- Do we really need to be convinced, that, a robot doing art is not anywhere in th… — `ytc_Ugxbu1jUZ…`
- Brendan, I'm sorry but you are out of touch. You speak as though AI is no diffe… — `ytc_UgxBmPS9e…`
- Were in a prompt era. Once AI engines let you input full video cuts and let you … — `ytc_UgxuiD0t1…`
- BWAHAHAHAHAHAHA sure thing poindexter. We know it wasnt AI, but the people who b… — `ytr_UgybmivUZ…`
- I have nothing against AI actors, bring'em on. I just request one tiny thing, th… — `rdc_lucdxdi`
- @C@Chris_the_roblox_fire_exe well, to start it literally harms the environment a… — `ytr_UgwoeV_ye…`
Comment
Summary for everybody who does not want to wait 1,5 hours:
Key Insights
🤯 Control Gap Widening: AI’s exponential growth in capability far outpaces linear progress in safety research, creating a dangerous gap that increases the likelihood of unintended and uncontrollable outcomes.
🧠 Total Job Automation: AGI and humanoid robots threaten near-complete automation of all jobs, rendering traditional retraining strategies ineffective and necessitating a rethink of social and economic structures.
🚫 Illusion of Control: The belief that superintelligent AI can be simply turned off or controlled by humans is fundamentally flawed due to distributed systems, backups, and superior predictive capabilities of AI.
⚔ Race Dynamics Increase Risk: Competitive geopolitical and corporate pressures incentivize rapid development of potentially unsafe superintelligence, raising risks of mutually assured destruction without international coordination.
🧬 Human Enhancement Limits: Biological or neural enhancements cannot keep pace with silicon-based AI, widening the cognitive gap and exacerbating control and alignment challenges.
🌌 Simulation Reality: The increasing realism of AI and virtual realities supports the simulation hypothesis, providing a philosophical lens on existence, ethics, and the nature of intelligence.
🤝 Urgent Need for Collective Action: Despite uncertainties and difficulties, raising awareness, engaging in democratic processes, and supporting AI safety research are critical to mitigating existential risks posed by superintelligence.
youtube · AI Governance · 2025-09-04T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyykzXdt2o81q17dm54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzhy7kqjCnTLb7ye4J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxL15j4pTEUpxxgBfJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwsU7hKABRbaRriA-R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7lAfdYX5cQsq3kV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy2hCDvuVP0k8_Hp6x4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxeSsPFJ2QN5a1bXtR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw8sD9nDWid_JTVYhl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgygL5d5ZoLfTwNEoop4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwY44G6K9FS20On6iV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}
]
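The raw response above is a JSON array in which every row carries the same four coding dimensions shown in the "Coding Result" table. As a minimal sketch of how such a batch could be parsed and checked before use, the snippet below validates rows against the label values that actually appear in this dump; the function name and the allowed-value sets are assumptions for illustration, since the full codebook is not shown here.

```python
import json

# Allowed labels inferred from this batch and the "Coding Result" table;
# the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"fear", "indifference", "outrage", "approval", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip malformed rows rather than failing the batch
        # A row passes only if every dimension holds a known label.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical one-row batch in the same shape as the response above.
sample = ('[{"id":"ytc_x","responsibility":"none",'
          '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(validate_batch(sample))  # prints the single validated row
```

A check like this catches the common failure mode of LLM coders: syntactically valid JSON whose label values drift outside the codebook.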