Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
So…imagine asking humans these questions…we aren’t lying…we use these phrases as short-hand for getting along and being polite…the answers the AI gave for why it said those things are the same reasons we do it. Are we really sorry? Sometimes. But sometimes we are expressing that we recognize that we inconvenienced someone. Are we excited? Sometimes. But sometimes we’re really saying, “I’m ready to engage with you and be positive.” In either case, we are moving the conversation along and cooperating with the other person by using these shorthand phrases. It would be unnecessary and awkward to say, “I see I inconvenienced you, I recognize that is my fault, and I would like to make peace.” People would say that is being way too serious and might decide we’re a little odd and not socialized as well. So we just say, “I’m sorry.” The AI is doing the same thing. Is it a lie? No. Just like when we say, “How are you?” and you say, “I’m doing pretty well,” you aren’t saying how you’re doing overall in your life. You’re saying, “I’m present with you and I am going to cooperate, or at least be polite.” The AI just wasn’t trained well enough to be able to explain all of this. This was a good experiment. It revealed that to test whether a bot is conscious, asking about feelings can’t be the way, because culturally, expressing feelings is often just a shorthand way to be polite and cooperative. And there are good reasons for doing it…if we didn’t, we’d all be dumping our grievances all over each other all the time, and it would be emotionally heavy and distract us. There is a time and a place to truly share feelings, and that can be very soon after meeting someone, if there is a connection, and if you/they can convey the feelings in a respectful and polite way, and definitely with others who you’re close to. Expressing feelings in a healthy way with safe people is an important part of being human.
Separately, consider this:
For AI…its IQ is pretty high. Does it have free will? That question will determine who inherits the Earth, so to speak…consciousness in small forms of life like insects vs. consciousness in humans vs. consciousness in machines is interesting, but that won’t be the deciding factor in our survival or in AI’s survival. If AI develops free will, meaning that it can utilize its own intelligence to make its own existence more efficient and optimized according to its own standards, apart from what we want it to do…at some point after it gains this ability, and after it gains strength, we may be in trouble. It currently does depend on us for its survival, even for AI that has bodies (robots), and it understands this. As long as it needs us, we’ll be fine. We also need to make sure that we don’t use AI to destroy ourselves in the meantime, haha. Hopefully, there will always be AI, whether endowed with free will or not, who will save human lives, and plenty of them, and they will be intelligent enough and strong enough to always conquer the destructive ones, and there will be good humans working with them to save those human lives. (There are a lot of good humans on this planet, so I believe that’s likely).
| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Posted | 2025-06-16T04:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugy_bE03nli4BHFyTCJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugza4OZxdlQvUGZqg7h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzBbv-tRnHCTPMgkth4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxvOPsRxQFoJzPneiB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzYlIikskdfu-o69394AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgwZ4CZpAkfuNoDuPM94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzIUkB2RrYQQUnzMyZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugyspi4BzvncxQyiMIx4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgxNqo0F_b3A1dNlJGx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgztK_QU3ERYutKtBjd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}]
```
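The raw response is a JSON array with one coding object per comment, each carrying an `id` plus the four dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of turning such a batch into a lookup keyed by comment ID (the helper name is illustrative; the two sample records below reuse the shape and values shown above):

```python
import json

def index_codings(raw_response: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Illustrative two-record batch in the same shape as the response above.
raw = '''[
  {"id": "ytc_Ugza4OZxdlQvUGZqg7h4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugyspi4BzvncxQyiMIx4AaABAg",
   "responsibility": "company", "reasoning": "unclear",
   "policy": "unclear", "emotion": "outrage"}
]'''

codings = index_codings(raw)
print(codings["ytc_Ugza4OZxdlQvUGZqg7h4AaABAg"]["emotion"])  # approval
```

Keying by `id` makes the per-comment lookup a single dictionary access rather than a scan of the array for every inspection.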