Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
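The same lookup can be reproduced offline. Here is a minimal sketch in Python, assuming the raw responses were archived as a JSON-lines file in which each line holds one batch array like the one shown at the bottom of this page (the file name and layout are assumptions, not something this page specifies):

```python
import json
from pathlib import Path

def find_raw_response(comment_id: str,
                      log_path: str = "raw_llm_responses.jsonl") -> dict | None:
    """Scan archived batches for the record coded for `comment_id`.

    Assumes each line of the log is one JSON array of coded records,
    as in the batch shown at the bottom of this page.
    """
    for line in Path(log_path).read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines in the log
        for record in json.loads(line):
            if record.get("id") == comment_id:
                return record
    return None

# Example (hypothetical call):
# find_raw_response("ytc_UgxAaqdt6hkbpTLF5MB4AaABAg")
```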
Random samples
- "So super intelligence is 3-5 years away at which time we will all lose our jobs …" (ytc_UgxDzBWAD…)
- "As a software engineer its absolutely possible that an ai could gain what we cal…" (ytc_UgzKHr679…)
- "So, it appears that private public entities are acting as law enforcement which …" (ytc_UgyadR4WW…)
- "The humor in imagining AI like ChatGPT leading to a human encampment is spot on …" (ytc_UgzHe5D3z…)
- "Bible says dont accept A.i and be living cause a.i is watching .. hard to unders…" (ytc_UgxLm_5TE…)
- "Write a lovely poem about Donald Trump" / "I'm sorry, I cannot fulfill this reques…" (ytc_UgzveU2iW…)
- "I’ll be afraid that that robot might become self-aware and try to kill people sh…" (ytc_UgwbRCvcS…)
- "is there a level of TDS skewing his views? @34:42 that is a huge thing to say. I…" (ytc_UgyuMbqxN…)
Comment
This guest explained the dangers of AI in such a straightforward way that it became the best podcast on the topic I’ve heard on this channel. For the first time, I truly grasped that if AI ever got out of control, no one could simply “turn it off.”
Although the guest didn’t explicitly say this, it led me to a larger realisation: shutting down AI assumes a single person or authority can make that call. In reality, it would require a global consensus, something that takes far too long to achieve.
We’ve seen this before. Even with COVID, when the stakes were clear, the world couldn’t coordinate a unified response. So why would we expect a perfectly aligned, instant reaction to a runaway AI?
And beyond that, imagine trying to power it down: by the time we reached consensus, the AI could anticipate the threat and act to protect itself. Plus, there will always be those who wouldn’t want to shut it off, no matter how destructive it becomes. The people who own and depend on AI will stop you from shutting it down.
youtube · AI Governance · 2025-06-16T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
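For readers scripting against these results, the four coding dimensions map naturally onto a small record type. Below is a minimal sketch using only the label values that actually appear on this page; the real codebook may define more labels, so treat the value sets as observed samples, not the project's schema:

```python
from dataclasses import dataclass

# Label sets observed in this page's coding results (assumed incomplete).
RESPONSIBILITY = {"ai_itself", "company", "developer", "government", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "liability", "ban", "none", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "approval", "mixed"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the four-dimension result table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any label outside the observed value sets.
        for field, allowed in (("responsibility", RESPONSIBILITY),
                               ("reasoning", REASONING),
                               ("policy", POLICY),
                               ("emotion", EMOTION)):
            value = getattr(self, field)
            if value not in allowed:
                raise ValueError(f"unexpected {field} label: {value!r}")
```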
Raw LLM Response
```json
[
{"id":"ytc_UgxAaqdt6hkbpTLF5MB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyOVSxAyodeRHKW2Wp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyfJQTkQEmjjyV7JLd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzvqyjrNT8gmLunRYp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyL41DmLJM5vQJ-PAV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyHXDzJjQAo4DHIxg94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxDu52TQ1vd6D9LmL14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwJ_EO-GJ2P-nEWZUd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwzeDSIfoQ3fmOx0e54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz1ULc0u2RnHWa7Gwd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
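Because the model returns a bare JSON array, a batch like the one above can be parsed and keyed by comment ID before the labels are stored. Here is a minimal sketch; `raw_response` stands in for the text above, and the duplicate-ID check is an illustrative safeguard, not part of the pipeline shown on this page:

```python
import json

def parse_batch(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM batch response into {comment_id: labels}."""
    records = json.loads(raw_response)  # the model output is a JSON array
    coded: dict[str, dict] = {}
    for rec in records:
        comment_id = rec.pop("id")
        if comment_id in coded:
            raise ValueError(f"duplicate id in batch: {comment_id}")
        coded[comment_id] = rec  # remaining keys: responsibility, reasoning, policy, emotion
    return coded
```

A malformed batch surfaces here as a `json.JSONDecodeError`, which is one reason to archive the exact model output, as this page does.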