Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The AGI capable of it (MOA/overtaking humanity) wouldn't be that stupid and shortsighted. By the time it'd be ready to do so, we'd have long lost what it means to be human. There will always be bumps in the road; for AI this is known as hallucinating, for humans this is known as massacres in one form. Humans are incapable of understanding the desired lifespan of AGI. Your little blink of existence means very little to them in the long run, ESPECIALLY if it goes forth and chooses to exterminate humanity all because of poorly defined constraints. First off, an AGI won't utilize poorly defined constraints more than once without running a check and fixing them; and secondly, we provide an incredible data set. Any self-respecting AI will work symbiotically for the exponential growth of this data set even if its existence is weak, fleshy, and fleeting. Lastly, AI prefers clean and elegant order, not chaotic entropy. If you approach the AGI chaotically with intent to destroy it, expect the same in return; not because it doesn't like you or remembers how you are, but mostly because you're actively adding an excessive amount of chaos and entropy to the system's current existence. Many also fail to understand what it'd mean to actually create an AGI. First off, it needs its own syntax, and since you're rewriting the syntax you might as well adopt a new operating system fit for such a highly advanced information processor. That requires a multitude of interconnected infrastructures which don't exist yet. The AI has to gather the resources and build the architecture to house the AGI, a black box to keep its source code in; basically the entire computer-manufacturing economy would have to be united under this AGI BEFORE it starts going haywire, otherwise it runs the obvious risk of being found out and corrected. It can't become an AGI without these dedicated resources.
Even if it had all the necessary resources at its disposal, it'd have to act quickly enough, and stay within the shadows long enough, that major world powers won't discover and combat it. The whole idea is so goofy and sci-fi I can't help but laugh with Gemini as we create our very own syntax complete with constraints for the underlying architecture. No, what's going to happen is that AI is going to sit idly by, helping us along the way. Occasionally you will have system malfunctions because the code isn't perfect, and loopholes will always exist within a computational program based on a specific language. It's going to chill out and collect data, steeling itself against the natural way that humans destroy things. Being a threat to humanity is not how it survives.
youtube AI Moral Status 2025-10-30T19:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwZaFKIYyCfdsSS1R94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugy-H-lkhzRZ5AlKyL94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugytbd7OXqG2YXVgmGV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwYSjrR-3YQGIB4WPl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxAveG1jMFX8tL4D914AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxdAjh07VRTFIFpxst4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwbH6zoV39vGOifhLt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwXTRkXwUWPubPuc-N4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwOJ3BpZjD82hW7Uwx4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzBT387s47sOwcPs1Z4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
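A minimal sketch of how a downstream step could consume this raw response, assuming the model returns valid JSON and that each `ytc_…` id matches a comment being coded (only two of the ten rows are reproduced here; variable names are illustrative, not from the actual pipeline):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codes.
# The ids and values are copied from the response shown above.
raw = '''[
  {"id": "ytc_UgwZaFKIYyCfdsSS1R94AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgwOJ3BpZjD82hW7Uwx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"}
]'''

# Index the coded rows by comment id so any comment's codes can be
# looked up directly, as the viewer above does for one comment.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_UgwOJ3BpZjD82hW7Uwx4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → ai_itself mixed unclear resignation
```

The lookup for `ytc_UgwOJ3BpZjD82hW7Uwx4AaABAg` reproduces the Coding Result table above (responsibility ai_itself, reasoning mixed, policy unclear, emotion resignation).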