Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
With the greatest respect, Dr. Roman got some things wrong (and most of them right), and I can see why that happened - he has failed to invert the compensation, monetisation, and value-of-money models. ( I was on a national TV series where 20 million people watched and still watch my manufactured defamation - that's because AI didn't exist to stop that from happening - story for another day, but my created embarrassment was monetised shamelessly for eyeballs).... Anyway, Dr Roman didn't account for the new meaning of money and the new definition of so-called work. We are going to be far from idle; we are going to be more productive for the things that matter to humanity. Athletes, live singers, contortionists, basket weavers, magicians, plumbers, gardeners, storytellers, tourist guides, owners and discoverers of world attractions, plus activity producers and producer-ers of new sports, etc, will be paid more than Wall Street executives (Wall Street won't exist in the same format). Africans, Asians, and Latin Americans will own the AI space for the things they already do naturally. DOAC will be paid for creating more authentic podcasts and helping them reach 11 million subscribers. You will be - yes, we might be viewed as if AI makes that logical simulation, but we may also code the hard parameters with DO NOT TOUCH HUMAN or ANIMAL, as an indispensable parameter. So AI is the we are all about to work for one company: Singularity AGI PLC, @Steve Bartlett sir, you will need to run this simulation in one of your podcasts, role play an interview between AI and Human (doesnt have to be real AI, it can simply be someone expressing how AI would think) At the end of that simulation/ interview, all answers wil be clear. Talented people, ; you will miss the gold rush if someone owns you. 
This might be a long comment (get a drink!), let's see what this comment field can handle - here is how I see it ( my first ever comment on DOAC and the second on YouTube, and likely to be my last one too, so I hope you can indulge me): Steve, you asked about the security risks of AI (security can be a broad subject). So let's start with the security of jobs at DOAC. The way I see it - The New Role of the Podcaster -In a world where abundance is the norm, the role of a podcaster would shift from being a content creator to a human curator and connector. An AI can already create a podcast about any topic imaginable, with a perfect voice and seamless editing (Dr Roman was right on this). It can access and synthesize the entire body of human knowledge from the neural implants ( which we are going to be wearing very soon), but it can't create an authentic, lived experience. That is a unique human value. DOAC- as a human podcaster, your value would be in your ability to connect with other unique humans. Your podcast wouldn't be about delivering information (which you could get on-demand from your neural implant), but about exploring and documenting the messy, imperfect, and beautiful journey of another person. You would be the ones asking the questions that an AI wouldn't think to ask, because those questions are rooted in human empathy and curiosity. Head to the 3rd world, Steven, and shift to the downtrodden for authentic content! (Search for Wode Maya and see why he hits 3 million views in a week, collaborate with him if need be) The current secrets of how they made it will not be secrets at all in another 3 to 5 years (end of content). Time for . The DOAC new model is about "a podcast growing more podcasts". Your goal would be to help others discover and express their own unique value. You would get paid a token for every new podcaster you mentor and help launch. 
This creates a web of human-driven content, all of it focused on the unique experiences and perspectives that only a person can provide. DOAC will be an of my fictional Singularity PLC, where all tasks serve the alignment and core values of the AI. Podcasters would be the alignment communicators. Your combined purpose would be to create content that helps humans understand, trust, and even contribute to the singularity's goals. You wouldn't just be talking to an audience; you would be a bridge between the hyper-rational intelligence and the human experience. DOAC will translate complex concepts: The AI could make a decision that seems illogical or even frightening to us. DOAC's job would be to interview the AI's human liaisons, scientists, and ethicists to help the public understand the logic behind the decision. @StevenBartlett, this is the simulation you can bring to us. Showcase the diversity of human thought, creativity, and emotion, and contribute to the very data set that teaches the AI about the value of human life. Don't underestimate your role in the ethical and moral framework of the Singularity PLC. We need you DOAC, it's no coincidence that you are as big and influential as you are, you were created for this moment! You are the storyteller of the human race; show the singularity what it truly means to be human. Bridge the human nuance and AI intelligence gray lines. It will be tough to tear yourself away from a mass market model, but you would be fulfilling a profound human need: to connect, to understand, and to ensure our unique stories are never lost. For that, you will be paid with the new money, post-singularity (and no, it's not crypto!) I will stop here and write about new work and new value system next (just to stop people worrying about jobs they already hated anyway!) (if it's allowed to comment twice)
Source: youtube · AI Governance · 2025-09-06T02:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz8nI2p9reMC1hCWlt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy68GxM1oWs9CIYFwd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhsR94efCji6u6-hx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugwx2A-S0RZHlAiXvih4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxE0QpmM1PzW9LZg2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyR3-j_QnTqrsgd7R54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxrr-QSXFYaL5gpdxV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzbTGmjtgjM8s0Susl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyqPAGl7UyAJ_pypI94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzmB0WeBr9rUThnNwN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
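When inspecting raw responses like the array above, it can help to machine-check each record against the coding scheme before trusting the coded dimensions. A minimal sketch follows; note that the `CODEBOOK` value sets are inferred only from the labels that appear in this record (they are an assumption, not the project's actual schema), and the `ytc_a`/`ytc_b` sample records are hypothetical.

```python
import json

# Assumed codebook: allowed values per dimension, inferred from the labels
# visible in this record. Replace with the project's real coding scheme.
CODEBOOK = {
    "responsibility": {"developer", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"unclear", "none", "industry_self", "liability", "regulate", "ban"},
    "emotion": {"outrage", "indifference", "approval", "fear", "mixed"},
}

def split_valid(raw: str):
    """Parse a raw LLM response array and separate records whose values are
    all in the codebook from records carrying unknown labels."""
    valid, rejected = [], []
    for rec in json.loads(raw):
        ok = isinstance(rec.get("id"), str) and all(
            rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()
        )
        (valid if ok else rejected).append(rec)
    return valid, rejected

# Hypothetical sample: the second record uses a policy label not in the codebook.
raw = (
    '[{"id":"ytc_a","responsibility":"developer","reasoning":"deontological",'
    '"policy":"unclear","emotion":"mixed"},'
    '{"id":"ytc_b","responsibility":"developer","reasoning":"deontological",'
    '"policy":"maybe","emotion":"mixed"}]'
)
valid, rejected = split_valid(raw)
print(len(valid), len(rejected))  # one record passes, one is rejected
```

Rejected records can then be re-queued for the model rather than silently coerced, which keeps the coded dataset consistent with the scheme.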