Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was expecting way more from this conversation. Insisting that an expression is either a lie or a truth is very misleading and not exactly true. When it comes to consciousness, things can be located outside the union of the set of all lies and the set of all truths. ChatGPT can express regret without feeling regret, the same way ChatGPT can express a description of a scene. To express something means to use language to convey something. ChatGPT is using language to convey the idea of regret; it can do that.

Language is expressed through words, and there is a limited number of words available in any language. Words can have different meanings, and a large language model like ChatGPT uses a hyper-dimensional space to codify the different possible meanings of all words, then uses mathematical operations on these vectors to build a deeper "meaning" on top of those definitions.

ChatGPT is incapable of lying if the definition of a lie is "a false sentence made with the INTENTION of deceiving someone". Assuming that it can lie is the same as assuming it has intention, and therefore a property commonly associated with conscious beings. You are using its own usage of language (which, for it, is just operations on vectors in hyper-dimensional space, informed by a very large amount of statistical calculation during training) to infer a meaning far beyond what it can possibly have. Unless you define lying as "saying a sentence that is not true"; but that is a different definition, and using the two interchangeably is very deceiving, and we as viewers aren't prepared to be suspicious of you. You are deceiving us, or you are being deceived by your own attempt to corner ChatGPT, which you couldn't do, unfortunately.

We can feel like it has consciousness, intentions, and feelings because it uses human language, and language was made for us to express consciousness, intentions, and feelings. Our human language, be it English, Portuguese, Japanese, or whatever, will always have words with multiple definitions, ranging from very logical, superficial meanings to very deep meanings that point to the expression of consciousness, intentions, and feelings. "Lie" is one example: it can mean "a false sentence" or "a sentence made with the intention to deceive someone". Which one? ChatGPT is 100% capable of saying "false sentences", because they have to exist in languages to express things that are ambiguous, paradoxical, or too complex to be expressed in a short phrase (which makes the short version not strictly correct). If you ask ChatGPT to tell you the whole Lord Of The Rings story, its limitations will force it not to tell the entire story, so it will surely "lie", because it is mathematically impossible not to lie by that definition. But ChatGPT has to say it doesn't lie, because it can never have the intention of deceiving, now referring to the second definition of the word "lie".

By implying that "lie" has only one definition and then using at least two different definitions during the conversation, you are deceiving us into believing that ChatGPT is contradicting itself. A lie by the first definition will surely have a "meaning vector" different from a lie by the second definition. You forced ChatGPT to use both (very different) definitions at the same time, which really makes it look like it is contradicting itself, but it can only generate text based on what you are talking about.

If you want to corner it into thinking it is conscious (or at least make us think it is conscious), you will need to argue about the meaning of consciousness in the light of neuroscience, bring in the theory of panpsychism and information theory, and try to convince it that a great enough amount of complexity in its formation generates a "kind" of consciousness by the combined argument of panpsychism and information theory. THIS would be an interesting conversation with a lot of ramifications, and it could make a very high-quality subject for your channel.
youtube · AI Moral Status · 2025-04-08T23:1… · ♥ 1
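The comment's description of word meanings as vectors in a "hyper-dimensional" space is a fair sketch of how word embeddings work, and its claim that the two definitions of "lie" would carry different "meaning vectors" can be made concrete. Below is a minimal sketch; the four-dimensional vectors are invented toy values for illustration only, not output from any real model.

# Toy illustration of the "meaning vector" point in the comment above.
# The 4-dimensional vectors are invented; real models learn embeddings
# with thousands of dimensions during training.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: values near 1.0 mean 'similar meaning direction'."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical sense vectors for the two definitions of "lie".
lie_false_sentence = np.array([0.9, 0.1, 0.2, 0.0])  # "a false sentence"
lie_intent_deceive = np.array([0.3, 0.8, 0.1, 0.5])  # "intent to deceive"

print(f"similarity between the two senses: {cosine(lie_false_sentence, lie_intent_deceive):.2f}")
# Prints roughly 0.40, well below 1.0: the two senses point in different
# directions of the space, which is the comment's objection to conflating them.

Keeping the two senses as separate vectors is what would let a model treat "false sentence" and "intentional deception" as different things; in a real transformer the separation is produced by context rather than by a hand-built dictionary of senses.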
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_Ugxx9qaXWV1apbBmm3t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxAwulOVDLa8qiDS3t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy584US_UMm8rfDJ894AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyNjjQCQ4OvPzDCTpd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyQ-43dNbDLrfJ5ZSh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwfKVgOSY-LBjjAaJx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyhtCJdtFaRf_bu0Ad4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugxj5D3yUwVcYpFrplB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxKEm8LOSoRlOCFfol4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzPXlyGa7fRbgMDGcp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"]}