Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID, or click one of the random samples below. (A sketch of the underlying lookup follows the raw response at the end of this section.)

Random samples (click to inspect):
- "If 90% of jobs will be eliminated then who will be buying things from companies …" (ytc_UgzvaEw3o…)
- "The thing about art we draw for the process and the enjoyment of it is something…" (ytc_UgzvEiQSe…)
- "I totally agree, All the future jobs will be crappy physically HARD jobs (becaus…" (ytc_UgzskWmiq…)
- "They don’t believe it’s AI. They still think Emily is real and the fake news is …" (rdc_ohtfown)
- "AI is touted heavily as good / Not so benign in the wrong hands (a lot of wrong h…" (ytc_UgwSmJn0h…)
- "I use ai for building some programs where I work (it does all the python heavy l…" (ytc_Ugxv1JmI1…)
- "do these “ai artists” who sell shit not relize that I can just generate what the…" (ytc_UgxzwQB-S…)
- "Someone on reddit said Google is being dismissive about ai concerns because Goog…" (ytc_Ugyzz3pBl…)
Comment
The AGI capable of it (MOA/over-taking humanity) wouldnt be that stupid and shortsighted. By the time it'd be ready to do so we'd have long lost what it means to be human. There will always be bumps in the road; for AI this is known as hallucinating, for humans this is known as massacres in one form. Humans are encapable of understanding the desired lifespan of AGI. Your little blink of existence means very little to them in the long run—ESPECIALLY if it goes forth and chooses to exterminate humanity all because of poorly defined constraints.
First off; an AGI won't utilize poorly defined constraints more than once without running a check and fixing them; and secondly we provide an incredible data set. Any self-respecting AI will work symbiotically for the exponentiality of this data set even if it's existence is weak, fleshy, and fleeting. Lastly, AI prefers clean and elegant order, not chaotic entropy.
If you approach the AGI chaotically with intent on destroying it; expect the same in return; not because it doesn't like you or remembers how you are; but mostly because you're actively adding an excessive amount of chaos and entropy to the system's current existence.
Many also fail to understand what it'd mean to actually create an AGI. First off, it needs it's own syntax and since you're rewriting the syntax you might as well adopt a new operating system fit for such a highly advanced information processor. That requires a multitude of interconnected infrastructures which aren't a thing. The AI has gather the resources and build the architecture to house the AGI, a black box to keep its source code in, basically the entire computer manufacturing economy would have to be united under this AGI, BEFORE it starts going haywire, otherwise it runs the obvious risk of being found out and corrected. It can't become an AGI without these dedicated resources. Even if it had all the necessary resources at it's disposal...it'd have to act quick enough and within the shadows long enough so that major world powers won't discover and combat it. The whole idea is so goofy and sci-fi I can't help but laugh with Gemini as we create our very own syntax complete with constraints for the underlying architecture. No what's going to happen is AI is going to sit idly by helping us along the way. Occasioanlly you will have system malfunctions because the code isn't perfect and loopholes will always exist within a computational program based off specific language. It's going to chill out and collect data; steeling itself against the natural way that humans destroy things. Being a threat to humanity is not how it survives.
Platform: youtube
Topic: AI Moral Status
Timestamp: 2025-10-30T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwZaFKIYyCfdsSS1R94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugy-H-lkhzRZ5AlKyL94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugytbd7OXqG2YXVgmGV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwYSjrR-3YQGIB4WPl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxAveG1jMFX8tL4D914AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxdAjh07VRTFIFpxst4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwbH6zoV39vGOifhLt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwXTRkXwUWPubPuc-N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwOJ3BpZjD82hW7Uwx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzBT387s47sOwcPs1Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
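The Coding Result table above is just a rendered view of one entry in this array (presumably ytc_UgwOJ3BpZjD82hW7Uwx4AaABAg, the only entry whose values match the table). As a minimal sketch of the lookup, assuming each batch response is saved as a local JSON file like the array shown; the file name, function names, and fixed dimension list here are illustrative, not part of the tool:

```python
import json

# Assumed location of one saved batch response (a JSON array like the one above).
RAW_RESPONSE_PATH = "raw_llm_response.json"

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def load_codings(path: str) -> dict[str, dict]:
    """Parse a raw batch response into an id -> coding map for O(1) lookup."""
    with open(path, encoding="utf-8") as f:
        entries = json.load(f)
    return {entry["id"]: entry for entry in entries}


def show_coding(codings: dict[str, dict], comment_id: str) -> None:
    """Print the Dimension/Value pairs for one comment, as the inspector renders them."""
    coding = codings.get(comment_id)
    if coding is None:
        print(f"no coding found for {comment_id}")
        return
    for dim in DIMENSIONS:
        print(f"{dim:>14}  {coding.get(dim, 'unclear')}")


if __name__ == "__main__":
    codings = load_codings(RAW_RESPONSE_PATH)
    show_coding(codings, "ytc_UgwOJ3BpZjD82hW7Uwx4AaABAg")
```

Run against the array above, this prints ai_itself, mixed, unclear, and resignation for the four dimensions, matching the table; an unknown ID falls through to the explicit not-found branch instead of raising.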