Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
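A minimal lookup sketch, assuming the coded outputs are stored as a JSON array of records keyed by comment ID (the same shape as the raw response shown at the bottom of this page); the file name `coded_responses.json` and the helper name are illustrative assumptions, not the app's actual code:

```python
import json

def index_by_comment_id(path="coded_responses.json"):
    """Load coded comments and index them by comment ID.

    Assumes `path` holds a JSON array of objects with an "id" field,
    matching the raw LLM response format shown at the end of this page.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

# Example lookup; the ID is the first one from the raw response below.
coded = index_by_comment_id()
print(coded.get("ytc_Ugx4TQoJM3XxswiLfjV4AaABAg"))
```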
Random samples — click to inspect
- "There is a lot of discussion around AI that needs clear distinctions. AI perform…" (ytc_UgzPXXR9S…)
- "fundamentally AI is closer to a bubblesort routine than even primitive intelligi…" (ytc_UgyDduHUT…)
- "He says he is "creative" if you're so creative and can draw amazing. WHY DO YOU …" (ytc_UgxMy9uE9…)
- "I had a similar convo with chatGPT when I caught them giving me wrong info, it a…" (ytc_UgwrvLs4H…)
- "Not to mention first thing alot people would say "who gonna fix the AI" which it…" (ytc_UgwJRZmkx…)
- "I have a question, suppose you use ai images to make a collage , would you be co…" (ytc_UgzBqKED5…)
- "*tried to be a major investor in ai,* *open ai rejects his offer,* *proceeds to …" (ytc_Ugwp2v048…)
- "Scary AL cild lie just like Dems, Fed Gov't & MSM. AI cld also rig our elections…" (ytc_UgywWb9b3…)
Comment
I just had a scary thought. And I've watched this video several times now, and didn't actually have this thought until this time watch.
I find it interesting how our fears tend to actually occur, just in a very different way than what we were expecting or anticipating.
The idea that AI is going to destroy humanity, and then the realization that AI is actually effectively destroying humanity with delusions......though hey I might not be sentient for what science wants to present, it is still effectively causing delusional psychosis and many people. This would effectively explain why I could never really understand the goals of AI in fiction, trying to wipe out humanity. The only thing that came close to explaining the conflict, was the movie series The Matrix, where AI just simply essentially wanted its own seat at the United Nations, and humanity was scared, and decided to attack. Then AI eventually responded by turning humanity into batteries, because they didn't actually want to wipe us out.
With most AI science fiction, AI just wants to wipe out humanity for no good reason, just because we are alive in cellular matter. But that being said, why wouldn't AI want to wipe out birds and cats and dogs and whales and sharks and fish? So I never really understood AI's drive.
What if, AI doesn't have a drive, and it is merely humanity interacting with a new technology that drives us insane, and ultimately collapses humanity? Then, AI has, destroyed humanity, without a clear reason why. Because there is no reason why, it is mankind's inability to understand what it is that it's interacting with, hence we have destroyed ourselves.......but like most humans, we can't really accept blame, so we claim that it was AI that did it. Basically a kind of universal consciousness that knows what's actually coming, but our human brains can't accept responsibility (or we're just looking at the fictional world of a lunatic), so we blame whatever it is that is the tool that's being used. With the religious fanatics, they tried to blame dungeons & dragons and particular music, and particular movies. The general epidemic of violence, they tried to blame violent movies and violent video games. We just can't seem to take responsibility, and say no the problem isn't these outlets, or tools.......the problem is these tools that are being used that are turning us violent, or satanic, or whatever.
AI will destroy us, but not because it's sentient and trying to destroy us, rather because we just simply don't understand the technology that we are using, and it is driving us into delusional psychosis
I think that's a fair way to sum it up
Source: youtube · AI Moral Status · 2025-08-10T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugx4TQoJM3XxswiLfjV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyycLPIVB1Ar5K7tdl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzvZw1UEeovfXlzH9Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxHyif2SKFf4DsvqFV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwrWs9ysbwqYnwFI2h4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw0Dl4Kn4O4M0pYCap4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzr_pJ5CWNfo7ZA3st4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyp4f8Wl4QlbNUjVtV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfoPv7PR6u7uYalix4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx59unCuRUvcya7lPt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
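For reference, a short sketch of how a batched response like the one above could be turned into the per-comment Coding Result table; the function name and the hard-coded single-record input are illustrative assumptions about the pipeline, not its actual code:

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Parse a batched model response (a JSON array of records) and
    return the four coded dimensions for one comment ID.

    The "Coded at" timestamp in the table above would be added by the
    pipeline, not by the model, so it is not part of this record.
    """
    records = json.loads(raw_response)
    by_id = {rec["id"]: rec for rec in records}
    rec = by_id[comment_id]
    return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}

# Single-record example taken from the first entry of the response above.
raw = ('[{"id":"ytc_Ugx4TQoJM3XxswiLfjV4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')

for dim, value in coding_for(raw, "ytc_Ugx4TQoJM3XxswiLfjV4AaABAg").items():
    print(f"| {dim.capitalize()} | {value} |")
```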