Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples — click to inspect:

- "The reason is AI doesnt buy into shitty identity politics and finds certain patt…" (ytc_Ugx3PR3AY…)
- "First Law: A robot may not injure a human being or, through inaction, allow a hu…" (ytc_UgzsT-3UI…)
- "US Cyber Command under DoD may be better than NSA. Much better transparency and…" (rdc_m6ylvwq)
- "Do you not think those conversations in the early part of the video are actually…" (ytc_UgyIUM8v1…)
- "East India Companies as an analogy? Oh great! Let's remember how many people suf…" (ytr_UgwpTADIp…)
- "@VinhNPL Did you not read my comment? Literally any basic look into the events …" (ytr_Ugy99uqM0…)
- "ok the thing with penrose s idea is ai is not conscious and dont know what it do…" (ytr_Ugx8iZw-v…)
- "because AI bots arent designed the way Asimov (and everyone else from that time)…" (ytr_UgxeBwA_8…)
Comment

Interesting talk.....with that said, he's selling two things: bitcoin (he's invested in it) and fear of AI (which his business is in AI safety). I don't doubt the concerns raised and I don't know about the timeline.....but it's difficult to look past the conflict of interest here as well. Regardless, the most dangerous part of the world today is that it has inconveniently become the most divided that it's been for the past few decades. Globalism is important when considering how to approach these concerns.

youtube · AI Governance · 2025-09-06T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwAZ1MTxSna7HJroaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzCjjcrWrWB5lVHDLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwxKAMCwz8lep7w0714AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0kCVmg1KxqFiIUPd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzJGnxpYCGb25CECjN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugyv6Zc9bth551xMiZ14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxMeKF9dCwDVd6DdY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyWQY4tJYAALq70EC94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzThRXluJvW2EFPgvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugxfn2ppd0G_TtROjC94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
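The lookup described above (raw batch response → coded dimensions for a given comment ID) can be sketched as follows. The JSON field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown; the function name and the two sample records are illustrative, not part of the actual pipeline:

```python
import json

# Illustrative excerpt in the same shape as the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgyWQY4tJYAALq70EC94AaABAg",
   "responsibility": "company", "reasoning": "virtue",
   "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxMeKF9dCwDVd6DdY54AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw batch coding response and key each record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgyWQY4tJYAALq70EC94AaABAg"]["policy"])  # liability
```

One dictionary lookup then recovers every coded dimension for a comment, which is all the "look up by comment ID" view needs on top of the stored raw output.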