Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "AI wont get rid of artists because we like the act of creating, something i feel…" (ytc_UgzBdODQd…)
- "With Ai mimicking deep human interaction, this just screams of automated spycraf…" (ytc_UgygoCEFv…)
- "Or just dont use AI because its terrible for people, the environment, people li…" (ytc_UgxaO5WMB…)
- "*If humans cannot fully understand A.I. - then transparency and scrutiny is impo…" (ytc_UgxqPm9OA…)
- "@Alfalfaprime every prediction we had about AI has been wrong. It is evolving m…" (ytr_Ugz6MyvpD…)
- "Well, I would have to say that yes, it is ethical to stop progress, and I will b…" (ytc_UggI9deWo…)
- "Who is this girl? I thought this was ai.? She's so ignorant lol who is this…" (ytc_UgyX5qqxJ…)
- "I went into a psychosis and the chatGPT conversations I was having were insane. …" (rdc_mukdmud)
Timestamps
00:02 - AI poses significant risks, leading to potential unemployment and safety concerns by 2030.
01:52 - Dr. Yampolskiy highlights AI's rapid advancement and safety concerns.
06:06 - AI safety progress lags behind advancements in AI capabilities.
08:11 - AI will achieve artificial general intelligence by 2027.
12:36 - AGI will drastically reduce job opportunities, leaving only a few human roles.
14:42 - Many workers deny AI's potential to replace their jobs.
18:22 - Unpredictable consequences of widespread unemployment due to advanced AI.
20:14 - Future advancements may challenge human intelligence's competitive edge against AI.
24:16 - The singularity will cause rapid, incomprehensible advancements in technology.
26:07 - AI will automate all jobs, altering the future workforce drastically.
29:57 - Humans may lose control over AI as it becomes super intelligent.
31:39 - AI development is inevitable, but the focus should shift to narrow AI.
35:19 - Technological advancements may lead to dangerous superintelligence without regulation.
37:12 - Synthetic biology poses a potential extinction risk through advanced virus creation.
40:51 - AI development evolves into a science of observation rather than programming.
42:41 - Concerns about leadership and safety drive attrition from OpenAI.
46:44 - Human actions are crucial for a positive AI future.
48:37 - The enforcement of AI accountability is fundamentally flawed.
52:07 - Focus on narrow AI to prevent existential risks.
53:47 - Protests over AI ethics are crucial but challenging to scale.
57:06 - AI and virtual reality may prove we live in a simulation.
58:50 - AI will revolutionize simulations, surpassing real-world experiences by 2030.
1:02:23 - Discussion on morality and ethics in technological simulations.
1:04:06 - Simulation theory impacts human perception of meaning in life.
1:07:18 - AI could significantly impact longevity and societal dynamics by curing aging.
1:08:51 - Advancements in AI may enable humans to significantly extend lifespan.
1:11:59 - Bitcoin's scarcity makes it a unique investment amid economic changes.
1:13:42 - Concerns about Bitcoin's future and the concept of living in a simulation.
1:17:00 - AI discussions evoke mixed feelings but drive progress through uncomfortable truths.
1:18:57 - We must focus on local realities amid overwhelming global information.
1:22:47 - AI and automation will significantly reduce job availability by 2030.
1:24:35 - Loyalty is the most important trait in relationships.
youtube · AI Governance · 2025-12-10T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzByY8yCi9ddD7P5p14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy67SWPHGkooo3JbPN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwHzPiMWXIPQfmHhiV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyNu77TW2xgqfe8Ro94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwQ43Uy04efFF0dGaV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxodt6AVDBvFzZzkbB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwuyc1rfaVq6PmgRDl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxNgg4PtOWfaQOAwWt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwhAKtXRfer-44MYBt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxQ5mUqCWgQLw1TQp94AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
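Each record in the batch response above pairs a comment `id` with the four coding dimensions from the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be validated and tallied, assuming the JSON-array format shown; the allowed values are inferred from this sample, not an official codebook:

```python
import json
from collections import Counter

# Coding dimensions and the values seen in the sample response
# (assumed vocabulary, inferred from the records above).
DIMENSIONS = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # skip records missing the comment ID
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(rec)
    return valid

def tally(records: list[dict], dim: str) -> Counter:
    """Count how often each value of one coding dimension appears."""
    return Counter(rec[dim] for rec in records)

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"virtue","policy":"regulate","emotion":"outrage"}]')
print(tally(parse_batch(raw), "policy"))  # Counter({'regulate': 1})
```

Dropping malformed records rather than raising keeps a single hallucinated label from discarding the whole batch; the skipped IDs could instead be queued for re-coding.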