Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm currently working on a concept for a globally governed, politically neutral AI Fail Safe. Something to act as a last line of defence against any catastrophic AI event. What started as an idea is now refined into a blueprint holding detailed implementation plans, governance structures and political negotiation tactics needed to bring it, or something like it, into existence. I'm no expert in those fields but what I am is an extremely quick learner and capable thinker, with the most useful tools the world has ever seen at my disposal, which I've used to design what I believe is truly our best shot at minimising the indisputable threat AI holds against humanity. I can say with absolute certainty, what I have is the single most effective solution that has ever been publicly proposed, and it is designed in such a way that no single individual, entity or organisation could ever garner control of it. Unlike almost all proposed precautions within the AI safety industry, it is not designed to limit or restrict the capability of AI, nor it's advancement. Not only is it effective, it's viable; balancing navigation past the opposition of AI restrictions from AI industry leaders with complete global and politcal neutrality. What I need is a voice. I've only very recently started my own independent research around AI safety, and so with virtually no credibility in the field, every attempt I have made to get this infront of the right people has been ignored. I have already contacted Dr. Yampolskiy in the hopes that will change, so thank you for the video, Steven. It has given me another open window to try and get what I have into the right hands, but if you see this comment and think you can help then please do reach out. It may be all for nothing, and I may well of wasted the last couple months of my life, but as Yampolskiy quite rightly says "we have no choice but to try."
youtube AI Governance 2025-09-05T00:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          liability
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugwckb2AuBdutcEmYyh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwuxKvjqYit_Fys2Rx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyG9NJ5OgqkOKPhAvF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"}, {"id":"ytc_UgygPrtqlPFzDWK19np4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyqwU9Ij9N6CXin9Vl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgwlFdNvCM-QkY8xCiF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}, {"id":"ytc_UgwcBZ2vNRd379CC3914AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgywVL7zmtzFeYjZS1x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugz9ASwLg5HP3PNwH5N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxFsY-91Xm94EidVGh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"} ]