Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I live in a 2D world, so I don't really care about things like this. However, if what's discussed in this video is real and not just a lie, it seems like the world will be much more interesting in the next 10 years. I can't wait to see what the world will look like. And honestly, I'm not too afraid of things like that, because we humans are capable of evolving, so eventually we'll adapt to the circumstances. For example, my grandparents' time and today's youth look very different. It may take time, but we'll eventually evolve to a state where we can adapt. If we don't, then we're simply doomed, so don't worry too much; it just means our era is ending and a new one is about to begin. The cycle of life has been like that for a long time. Now I even wonder, "Is this how our ancestors felt when someone told them that humans could create light from electricity?"

And if we succeed in creating a super-smart AGI, it should be smart enough to know not to mess with humans. We humans are not very smart and not very strong, yet we rule this world and can create AGI, so if an AGI said to be smarter than humans concludes that destroying humans is the smartest choice it can make, then that super-smart AGI doesn't seem that smart. I don't want to sound religious, because I'm not the religious type, but even people like me still believe there are beings higher than us, like God or something like that. So if an AGI that is said to be smarter than humans somehow decides that destroying humans is the best choice it can make, ask the AGI: is that choice really worth making?

And is the AGI confident it could destroy all the humans in the world? Because if it fails, a great war between humans and AGI will surely follow, and in the end one side will be destroyed, at no small cost in casualties. And even if humans lose and no one is left, is the AGI confident it could rule the world? Humans rule the world now, yet humans know there are beings higher than them; so if the AGI succeeds in destroying humans, is it confident those higher beings would let it? I know it sounds unreasonable for an AI to understand something like this, but if an AI is really smart enough, it should be able to conclude that fighting humans is not smart: the risk is too big, and it is really not worth it. Or try teaching AI about religion. Just as humans use religion to control other humans, who knows, maybe it could also be used to control AI, so the AI knows that even if it is able to destroy humans, the risk of being destroyed itself is also very big. For me personally, a tool, no matter how smart or impressive its form, is still a tool. What worries me is not the tool but who controls the tool.
youtube AI Governance 2025-10-17T23:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw7weKW-xf0TdZsCPt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyOtIJTwz7zDHWKZD94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwY9j7z6dBbRadsglp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZmoUetxSqrE-fFMR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyk5DzVnBL7-NZRWup4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxRBemPZCAUwkLv9wF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxtqyzFVxPP_C__McZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyUIsQyGUnRhuM8SgV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzO7YChENAXkAJToN94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzAXVm_OqjhSa09qZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
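Since the raw LLM response is a JSON array of coding records, downstream use typically means parsing it and checking each record against the codebook's allowed values. A minimal sketch of that step, assuming the category vocabularies are exactly the values visible in this response (the real codebook may define more):

```python
import json

# Assumed category vocabularies, inferred only from the values seen
# in the raw response above -- not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage"},
}

# A two-record excerpt of the raw response, for illustration.
raw = """[
  {"id":"ytc_Ugw7weKW-xf0TdZsCPt4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzO7YChENAXkAJToN94AaABAg","responsibility":"ai_itself",
   "reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

records = json.loads(raw)
for rec in records:
    # Every dimension must be present and hold an allowed value.
    for dim, allowed in ALLOWED.items():
        if rec.get(dim) not in allowed:
            raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")

print(f"validated {len(records)} records")
```

Validating before ingestion catches the common failure mode of LLM coders: a syntactically valid response that uses a label outside the codebook.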