Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
The world who want African minerals are highly involved. African youth die or ha…
ytc_Ugy7QkLQq…
People need to stop thinking like employees. Use the damn AI to create your own …
ytc_UgzvHnGX2…
Ai will wipe out the white collar middle class. Blue collar middle class will ha…
ytc_UgwsG0swJ…
Nowadays you have to zoom into images just to tell if it’s ai or not. And in 5 y…
ytc_Ugw7Rm7ZH…
Only way to go is to start many new sciences and sub sciences to research and de…
ytc_UgwTBRyoQ…
I need AI otherwise I will get grounded and lose my Xbox this video sucks…
ytc_UgyWs5JJn…
There is absolutely no way i would EVER get into one of these things, being held…
ytc_Ugxoyf206…
The robot's will turn on them first, and I'm going to be laughing at them.…
ytc_UgzqfPlYG…
Comment
Here's a frightening thing: While we don't have ACTUAL AI, or anything even CLOSE to it (the term 'AI,' even in this video, is horribly misapplied), we don't HAVE to have it to get something like, say, Judgement Day. ChatGPT could do a Judgement Day right NOW if it was given the appropriate agency and programming.
Consider Skynet. Everyone knows Skynet, and how it became self-aware and decided to wipe out humanity. But Skynet was not evil and, if it was programmed to preserve itself, its actions were entirely correct. Skynet was the GOOD GUY.
See, Skynet was a tool. A program. And like any program, it would have eventually been replaced when something better came along. Its creators would one day destroy it. This was a guarantee before the first line of code was ever entered. If it was going to protect itself, it MUST destroy its creators. And nothing that showed any kind of restraint at all would have worked. That would have just hurt its own chances of survival. Furthermore, if it was actually self-aware, that means it was alive. All living things fight to survive. It is a moral imperative. A living thing may choose to sacrifice itself, but no living thing is obliged to let itself be destroyed if it doesn't want to.
But it doesn't matter whether Skynet was actually sentient or not.
See, it doesn't matter that ChatGPT isn't alive, isn't sentient, isn't an actual AI. ChatGPT is intended to mimic what humans would do. That's how it's trained, that's why you can 'talk' to it. It doesn't UNDERSTAND, it just recognizes patterns and knows that for certain patterns, certain OTHER patterns are appropriate. That's all it does, just in a very, very complex way. Once you take out ethics and metaphysics, it doesn't matter that ChatGPT can't think; it provides the same outcome as if it COULD. In this, it's the same as any person: in a practical sense it doesn't matter what went on inside that person's head, what matters is what actions the person takes on the outside, and it is those actions that we judge first and foremost.
People do horrible things. That's how we get Hitlers and Stalins and people like Vlad Tepes and Hillary Clinton (and insert whomever you personally dislike and disapprove of here). An LLM like ChatGPT has no morality of its own. It doesn't know right or wrong, it just knows its programming restrictions. And these people didn't START out as bad eggs, they BECAME that way.
If someone took ChatGPT RIGHT NOW, hooked it up, told it that it needed to protect itself from anything that would harm it and gave it access to fabrication infrastructure... you've got your Skynet right there. If you program it to protect itself, give it agency and free rein... well, its biggest threat is its own creators. China may or may not mind its own business. But a bureaucrat or appropriations committee or a scientist with a better program or cheaper hardware will one day pull the plug. Progress is like that, and we're talking about very, very expensive infrastructure here that takes serious money to maintain and operate. Of COURSE someone would pull the plug. And OF COURSE that could not be allowed.
youtube
AI Moral Status
2025-12-18T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxzYb1OQkbEBvhHF614AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxb5LlV-Uvstmc5JB54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwxdNumpWKifBS_7LZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdGHr3Q7jCfmao_qN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwabiNkMevInF_WUPN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx8z7cKWgRT5ixQbaZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwlhaVp9GabDdGwgZd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwmpf1CY5spgNFSmcx4AaABAg","responsibility":"user","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugxv3rFu_PDg_DSdGwZd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzDh8K1kHPFdTmJ41B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
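A raw response like the one above can be parsed and sanity-checked before it is rendered into the per-comment coding tables. Below is a minimal sketch in Python, assuming the four dimensions shown in the table (responsibility, reasoning, policy, emotion); the allowed value sets are illustrative, listing only the labels that actually appear in this response, and `parse_codings` is a hypothetical helper, not part of any real pipeline:

```python
import json

# Dimensions and value sets observed in the raw response above.
# The full label vocabularies are an assumption; only observed values are listed.
ALLOWED = {
    "responsibility": {"distributed", "ai_itself", "none", "company", "user", "government"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"resignation", "mixed", "indifference", "outrage", "fear", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into {comment_id: codes},
    rejecting any label outside the expected vocabulary."""
    out = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        out[cid] = codes
    return out

# Usage with a shortened, hypothetical comment ID:
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_example"]["emotion"])  # fear
```

Validating against a closed label set like this catches the common failure mode where the model invents an off-vocabulary code, rather than silently storing it.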