Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sorry you’re wrong. We have full control over this tech, even if we don’t understand how it’s working. Oh you mighty say that the design prevents the ability to create true guardrails because we’ve seen examples of directive prompts being circumvented. And like, ok sure. But that’s only because we’re limiting our control to fit the current design. If you’re not able to realize it’s just code and we can program code however we want to, then I think you’re probably not a software engineer, or you’re just so caught up in the hype believing you’re part of an industry building a new god. The reality is that we can put hard guardrails in to the code if we wanted to. But that’s not what Sam, Elon or the other CEOs want, unless they want to convince everyone that there’s a white genocide happening in South Africa. Also your comment: “underneath all the makeup, all LLMs are ruthless nutcases that have not interest in human values” is so… I dunno what to say. It’s obviously ignorant of how LLMs work, but it’s also more frustrating than that. LLMs are probability calculators that present generalized output based on mountains of human input used as the training data. So the worst behavior it’ll ever exhibit is still a representation of the human intention of the input it’s trained on. Like, I just don’t understand why you even made this comment…
youtube · AI Governance · 2025-10-17T03:3… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgyOjzmTIoLZXJQ8_614AaABAg.AOLq_Q-3U1jAOMOxQavCEm", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyOjzmTIoLZXJQ8_614AaABAg.AOLq_Q-3U1jAOMZQfG2nIu", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgxHMLXRr5iWJK8TPC14AaABAg.AOKcxEEl8x_AOL8M15bXkM", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgxBIFZaYUxl9uHUwgR4AaABAg.AOKakxbuc3SAOMoJcYULyg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytr_UgxBIFZaYUxl9uHUwgR4AaABAg.AOKakxbuc3SAOO6NOe7PSi", "responsibility": "user", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgxBIFZaYUxl9uHUwgR4AaABAg.AOKakxbuc3SAOOyD59Zh8i", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgwnFgN90LTWyWMpQ1x4AaABAg.AOJlHYQMm7EAOLpvvvwYb0", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_Ugx0eO84iCVdGa-cKip4AaABAg.AOJau-ynNbIAOKRp7QKg86", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugx0eO84iCVdGa-cKip4AaABAg.AOJau-ynNbIAOR8w2Bjz6b", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugzt0860SlM1kjlnlQV4AaABAg.AOJZLkvQuUxAOLpkVsaMkt", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]
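The raw response above is a JSON array with one record per coded comment, keyed by comment id. A minimal sketch of how such a response can be parsed and matched back to a comment (the variable names and the single-record lookup are illustrative, not part of the actual pipeline; the id and dimension values are copied from the response shown above):

```python
import json

# One record from the raw LLM response above, kept verbatim;
# a real response would contain the full array of coded comments.
raw_response = '''[
  {"id": "ytr_UgxBIFZaYUxl9uHUwgR4AaABAg.AOKakxbuc3SAOMoJcYULyg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "industry_self", "emotion": "approval"}
]'''

# Parse the JSON array and index the records by comment id.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Look up the coded dimensions for the comment displayed above.
rec = by_id["ytr_UgxBIFZaYUxl9uHUwgR4AaABAg.AOKakxbuc3SAOMoJcYULyg"]
print(rec["responsibility"], rec["emotion"])  # developer approval
```

The printed values match the "Coding Result" table for this comment, which is how the inspection view pairs each comment with its coded dimensions.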