Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Comment
Depends how we make them. If we want them to simulate humans, they would indeed simulate us, but they will never be a perfect duplicate of us, it's scientifically impossible. Why would we program them to feel pain? If they "die", we can just rebuild them and paste their code back into them, whereas humans can't simply "respawn" in the same way. Why would we program them to feel sadness? They have no need for it, and we don't stand to gain anything from it.
If we want an AI to do something, we program it to do that something. Adding hurdles like pain or emotion complicates their task, and muddies the end result. We already have a problem with human error, we don't need AI to add to that problem.
We made them to advance our species, not burden it. Some humans will disagree, but I firmly disregard their opinion, they are far too willing to attach humanity to innately inhuman objects. It is funny to watch a man yell "WILSON?!" at a rock, and equally sad at the same time, I would not see AI de-evolve into that.
Platform: youtube · Video: AI Moral Status · Posted: 2024-09-26T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz08ZDfbVphQbPRRH14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyc_H7WrWTNPqD_LcJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYBHtP3s_Owb1T-mp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgE8uq6zswy7U-J2l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-G0wE-OjlkPpGhQh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwY-KpVCGJbY1QCmwN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwqjdz3O8onaP1tKVN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzVI_KCifXxJLmlLgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwyTrV0qYN67KW0hhx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzxsw8XjFqVzctpa1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"}
]
```
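The batch response above is a JSON array with one record per coded comment, each carrying an `id` plus the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and validated follows; the allowed label sets in `SCHEMA` are assumptions inferred only from the values visible on this page (the full codebook may define more), and `parse_coding_response` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the labels
# visible in this response and the result table; the real codebook may differ.
SCHEMA = {
    "responsibility": {"none", "developer", "company"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "outrage", "indifference", "mixed", "fear"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records.

    A record is kept when it is a dict with an "id" and every coding
    dimension holds one of the allowed labels.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

Filtering rather than raising keeps one malformed record (a common failure mode of LLM-generated JSON) from discarding the whole batch; dropped records could then be re-queued for recoding.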