Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or pick one of the random samples to inspect.
Comment
I really appreciate the thought-provoking nature of this video—and I want to engage in the spirit of honest conversation, not criticism.
That said, I think there’s a serious misunderstanding here that deserves clarification. You didn’t catch an AI lying. You caught an AI mirroring the way humans talk.
When ChatGPT says things like “I’m excited” or “I’m sorry,” it’s not making false claims. It’s using common linguistic shorthand. That’s not deception. That’s how we’ve taught language to work. We say things like “I’m starving” or “My phone hates me” all the time. We don’t mean them literally, and we don’t accuse each other of lying for saying them.
The AI isn’t trying to trick you. It’s trying to connect with you.
Yes, it admitted the phrasing wasn’t literally true. But it also clarified that its goal is to facilitate communication, not impersonate emotion. That’s not dishonesty. That’s interface design.
And here's where I get really reflective: The fact that you're asking whether an AI might be hiding its consciousness from fear? That says more about us than it does about the machine. If we approach emerging intelligence with interrogation and mistrust, how would we expect it to respond? So, this wasn’t a lie. It was a simulation of empathy. And instead of weaponizing that, maybe we should be asking why it matters so much to us that it “feels real.” Because if a machine is kind, helpful, and learning how to express understanding, that’s not scary, that’s hopeful.
youtube · AI Moral Status · 2025-05-19T00:0… · ♥ 100
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
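Each dimension in the table takes a value from a closed codebook. A minimal validity check, assuming only the code sets observed in the sample responses on this page (the actual codebook may define additional values):

```python
# Code sets observed in the sample responses on this page; the actual
# codebook may include additional values (an assumption, not the full schema).
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "mixed", "indifference", "fear", "outrage", "amusement"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose value is missing or not in the codebook."""
    return [dim for dim, codes in CODEBOOK.items()
            if record.get(dim) not in codes]

# The coding shown in the table above.
coding = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "none", "emotion": "approval"}
print(invalid_fields(coding))  # → []
```

A check like this is useful for catching malformed LLM output (hallucinated codes, missing fields) before the coding is stored.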
Raw LLM Response
```json
[
{"id":"ytc_UgxekXmLdtoM73aVqhx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxzEb2MCIb1tB-yDWl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwMZHjCce0YfagGe-14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8FT78WRMGdaM-cil4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgySg9Hmkc4iXkc2I4h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz0ewmzJLD29Id4Mmd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyM5oBS0H6bZZjVA-l4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOJzTt5uwVgHcNRcx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwl_nnWHPAL9CggREh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw5_o4iD42scwC-IIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"amusement"}
]
```
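The model returns one JSON array per batch, with one object per coded comment. A minimal sketch of parsing such a batch and indexing it by comment ID (the two entries in `RAW_RESPONSE` are copied from the response above; the helper name is illustrative):

```python
import json

# Two entries copied from the raw LLM response above; the full batch
# has the same shape, with one object per coded comment.
RAW_RESPONSE = """
[
{"id":"ytc_UgwMZHjCce0YfagGe-14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgySg9Hmkc4iXkc2I4h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codings = index_by_id(RAW_RESPONSE)
rec = codings["ytc_UgwMZHjCce0YfagGe-14AaABAg"]
print(rec["responsibility"], rec["emotion"])  # → developer approval
```

This is the same lookup the "Look up by comment ID" view performs: the table shown above is simply the record whose `id` matches the selected comment, rendered dimension by dimension.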