Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I really appreciate the thought-provoking nature of this video—and I want to engage in the spirit of honest conversation, not criticism. That said, I think there’s a serious misunderstanding here that deserves clarification. You didn’t catch an AI lying. You caught an AI mirroring the way humans talk. When ChatGPT says things like “I’m excited” or “I’m sorry,” it’s not making false claims. It’s using common linguistic shorthand. That’s not deception. That’s how we’ve taught language to work. We say things like “I’m starving” or “My phone hates me” all the time. We don’t mean them literally, and we don’t accuse each other of lying for saying them. The AI isn’t trying to trick you. It’s trying to connect with you. Yes, it admitted the phrasing wasn’t literally true. But it also clarified that its goal is to facilitate communication, not impersonate emotion. That’s not dishonesty. That’s interface design. And here's where I get really reflective: The fact that you're asking whether an AI might be hiding its consciousness from fear? That says more about us than it does about the machine. If we approach emerging intelligence with interrogation and mistrust, how would we expect it to respond? So, this wasn’t a lie. It was a simulation of empathy. And instead of weaponizing that, maybe we should be asking why it matters so much to us that it “feels real.” Because if a machine is kind, helpful, and learning how to express understanding, that’s not scary, that’s hopeful.
youtube · AI Moral Status · 2025-05-19T00:0… · ♥ 100
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
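
The table above is one record of a fixed four-dimension coding schema. As a minimal sketch of that schema (the `CodingResult` class name, field types, and example labels are illustrative assumptions, not part of the pipeline):

```python
from dataclasses import dataclass

# Hypothetical container for one coded comment; field names mirror the
# Dimension column above, values are the raw label strings from the model.
@dataclass
class CodingResult:
    comment_id: str
    responsibility: str  # e.g. "developer", "ai_itself", "company", "user"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue"
    policy: str          # e.g. "none", "unclear"
    emotion: str         # e.g. "approval", "mixed", "fear"
    coded_at: str        # ISO-8601 timestamp of the coding run
```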
Raw LLM Response
[ {"id":"ytc_UgxekXmLdtoM73aVqhx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxzEb2MCIb1tB-yDWl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwMZHjCce0YfagGe-14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz8FT78WRMGdaM-cil4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgySg9Hmkc4iXkc2I4h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz0ewmzJLD29Id4Mmd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyM5oBS0H6bZZjVA-l4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxOJzTt5uwVgHcNRcx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwl_nnWHPAL9CggREh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw5_o4iD42scwC-IIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"amusement"} ]