Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Businesses cant AI people out of jobs thinking that they will stay in business b…
ytc_Ugw5rcPt-…
Silicon Vally Billionaires are pouring their hearts and soul in this AI data cen…
ytr_UgyuXvpAS…
If you want to vent to an AI then host and run it locally on your computer using…
ytc_UgyepJxiW…
most of these modern doctors are scammers, why not add an AI to it, ya that wont…
ytc_UgxDxjNhi…
why afraid of AI ? u shud be afraid of human..
who created AI ?
who started the …
ytc_UgyaLd4dn…
Ai is wrong ! It’s already happening in Gaza where they are 🤢butchered by you kn…
ytc_UgzPxsdTl…
Even the AI checkers suck. I pasted an essay from my high school years and it th…
ytc_Ugxv-nNLP…
Ai made me into a cat woman with tentacles and now I'm scared to download any mo…
ytc_Ugy5MfMW-…
Comment
00:00 - Roman Yampolskiy discusses AI safety, the rapid advancements in AI capabilities, and the potential for widespread unemployment due to automation by 2027, with AI possibly exceeding human capabilities.
10:56 - The conversation explores the implications of AGI and super intelligence on various professions, the economic challenges of mass unemployment, and different perspectives on enhancing human intelligence versus AI.
21:51 - Predictions for 2030 include the rise of humanoid robots and the potential for AI to outpace human understanding, leading to a singularity, as well as discussing the importance of incentives and the dangers of uncontrolled super intelligence.
32:51 - The discussion contrasts AI development with nuclear weapons, emphasizing AI's nature as an autonomous agent rather than a tool, and considers the increasing accessibility of AI technology and its potential for misuse, especially in creating biological weapons.
39:33 - Concerns over the black box nature of AI, OpenAI's approach to safety, and the potential motivations behind pursuing super intelligence, including the possibility of world dominance.
47:03 - The discussion shifts to potential actions to address AI risks, the limitations of legal solutions, and the need for individuals to question and challenge those developing AI, while also acknowledging existing protests and movements.
54:25 - They explore personal strategies for navigating a world with advanced AI, including living life to the fullest and considering simulation theory, where the current reality might be a simulation run by a more advanced civilization.
01:02:36 - The conversation explores the implications of living in a simulation, investment strategies for the distant future, and the importance of loyalty and ethical standards in the face of rapid technological advancements, closing with a call to prioritize human well-being and responsible AI development.
Detailed summary 👉 https://tinyurl.com/yepv3ymt
youtube
AI Governance
2025-11-16T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy2k6D6vB9shLsV0fB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwtf8l_We5-R84JpK54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwWfPcUiPaGuOs-6Y94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw_sX0_KhB_wGv8NEZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxywFbCBUU4Qax468d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyVfYRcOVmKAZdpDNt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwdWeeAPW0v1AzP6rl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyhJda9iF9sdVzGTax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyElWTBgoMM8pngi314AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugww8UX1oGBGyOw72K54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
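
The raw response above is a JSON array of per-comment coding records with the fields `id`, `responsibility`, `reasoning`, `policy`, and `emotion`. A minimal sketch of the "look up by comment ID" step, assuming only the structure visible in the sample (the allowed label values are those observed here, not an exhaustive schema), might look like:

```python
import json

# Two records copied from the raw LLM response above.
raw = '''[
 {"id":"ytc_Ugy2k6D6vB9shLsV0fB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwWfPcUiPaGuOs-6Y94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# Field names taken from the sample records; the LLM is expected to
# emit exactly these keys for every coded comment.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Records missing a required field are skipped rather than raising,
    since raw LLM output is not guaranteed to be well-formed.
    """
    records = json.loads(raw_json)
    by_id = {}
    for rec in records:
        if REQUIRED_FIELDS.issubset(rec):
            by_id[rec["id"]] = rec
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgwWfPcUiPaGuOs-6Y94AaABAg"]["emotion"])  # fear
```

Skipping malformed records (rather than failing the whole batch) is one reasonable policy here; a stricter pipeline might instead log and re-prompt for any comment whose record is missing or incomplete.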