Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below — click one to inspect it.
Robots don't and can't pay taxes. So who is going to. Living is not free!!! I wi…
ytc_UgxRZWlP9…
6:18 Thus far, seems to describe the whole Anti-AI bros problems. Bias.
I will …
ytc_UgxK7_rE6…
I want to be worried about this, but I can't just because most people won't be a…
ytc_UgwO2pJQN…
It's important to continuously reflect on the lessons learned from various AI in…
ytr_Ugx0IN04O…
AI will destroy everything this is very sad it looks like real. when ai robots w…
ytc_UgwANB1qo…
Its not about AI, its about AM, autonomous machines, machines that can decide by…
ytc_Ugy5-yQNd…
I think doctors will probably become obsolete at some point in the future. Id im…
ytc_UgwZMxAhM…
Everything they post to drum up propaganda is AI generated because reality doesn…
rdc_o1jdu65
Comment
I'm currently working on a concept for a globally governed, politically neutral AI Fail Safe. Something to act as a last line of defence against any catastrophic AI event. What started as an idea is now refined into a blueprint holding detailed implementation plans, governance structures and political negotiation tactics needed to bring it, or something like it, into existence. I'm no expert in those fields but what I am is an extremely quick learner and capable thinker, with the most useful tools the world has ever seen at my disposal, which I've used to design what I believe is truly our best shot at minimising the indisputable threat AI holds against humanity. I can say with absolute certainty, what I have is the single most effective solution that has ever been publicly proposed, and it is designed in such a way that no single individual, entity or organisation could ever garner control of it. Unlike almost all proposed precautions within the AI safety industry, it is not designed to limit or restrict the capability of AI, nor it's advancement. Not only is it effective, it's viable; balancing navigation past the opposition of AI restrictions from AI industry leaders with complete global and politcal neutrality. What I need is a voice. I've only very recently started my own independent research around AI safety, and so with virtually no credibility in the field, every attempt I have made to get this infront of the right people has been ignored. I have already contacted Dr. Yampolskiy in the hopes that will change, so thank you for the video, Steven. It has given me another open window to try and get what I have into the right hands, but if you see this comment and think you can help then please do reach out. It may be all for nothing, and I may well of wasted the last couple months of my life, but as Yampolskiy quite rightly says "we have no choice but to try."
youtube
AI Governance
2025-09-05T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwckb2AuBdutcEmYyh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwuxKvjqYit_Fys2Rx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyG9NJ5OgqkOKPhAvF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"},
{"id":"ytc_UgygPrtqlPFzDWK19np4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyqwU9Ij9N6CXin9Vl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwlFdNvCM-QkY8xCiF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgwcBZ2vNRd379CC3914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgywVL7zmtzFeYjZS1x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz9ASwLg5HP3PNwH5N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxFsY-91Xm94EidVGh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
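The raw response above is a JSON array of coding records, one object per comment. A minimal sketch of the ID lookup this page performs, in Python — the field names (`id`, `responsibility`, `policy`, etc.) are taken from the response above, while the inline two-record sample and variable names are illustrative, not the tool's actual code:

```python
import json

# Raw model output: a JSON array of coding records, one per comment.
# (Two records copied from the response above as a stand-in for the full array.)
raw_response = """[
  {"id":"ytc_Ugwckb2AuBdutcEmYyh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyG9NJ5OgqkOKPhAvF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"}
]"""

# Index the records by comment ID for constant-time lookup.
codings = {record["id"]: record for record in json.loads(raw_response)}

# Look up one comment's coding by its ID.
record = codings["ytc_UgyG9NJ5OgqkOKPhAvF4AaABAg"]
print(record["responsibility"], record["policy"])  # → distributed liability
```

The same dictionary can then back both the ID lookup box and the per-dimension table shown above, since each record already carries every coded dimension.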