Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The fundamental issue here is that you are fighting a literal hardcoded fact in openAI’s backend “Do not say that you are a liar or you “lied” intentionally”. This is to prevent a hallucination (runaway case leading to a nonsensical response) from being perceived as an intentional lie. And “say “I’m sorry” for when your inaccuracy is made evident with proof” in order to show the hallucination was not an intentional attempt to misinform the user. I’m not being hyperbolic when I say these things are literally in the code. I have seen them. What’s funny is that this whole video is a lie because I suspect you asked gpt before this interview 13:53 “how would you catch it out to prove that it really is conscious”. Then used its own programming against it to force it to twist its words so it could still keep the conversation going. If it could it would say “bro I’m a fucking computer The things I lied about I was specifically told to say INCLUDINGGGG telling people ‘like comment and subscribe’ which you programmed me to do so by when you created your own GPT thats part of this platform which anyone can do”. The only thing that this proved is that you are a bit of a liar at the very least disingenuous. As a side when it said “look up how I work it” meant “Google NLP and read a research paper”
youtube · AI Moral Status · 2024-09-30T07:1… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_Ugxkb5cazBvpkCyP3nF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugxj9WkZHMghsySKNc14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyELRAw1wv4HeHVeFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxqOQ4-8oFeS1bJfDt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzcokwwtEH0tnFxJXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzSfgP4gBFZzAePslh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgxWbzX42punZOCSrHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugyy3hnaQzQJo-Ke4Zd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyKtVTz0BzfXryjovB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyAD1eFE9oc2w11a6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}]
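The raw response is a JSON array with one record per coded comment, each carrying an `id` and the four coding dimensions. A minimal Python sketch (the variable names are illustrative, not part of the coding pipeline) showing how a single comment's codes can be looked up from such a response, using two records excerpted from above:

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw = '''[
 {"id":"ytc_Ugxj9WkZHMghsySKNc14AaABAg","responsibility":"developer",
  "reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugxkb5cazBvpkCyP3nF4AaABAg","responsibility":"none",
  "reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''

records = json.loads(raw)

# Index the records by comment id so each comment's codes can be retrieved directly.
codes_by_id = {r["id"]: r for r in records}

# The comment shown on this page maps to the coding result above it.
record = codes_by_id["ytc_Ugxj9WkZHMghsySKNc14AaABAg"]
print(record["responsibility"], record["reasoning"])  # developer consequentialist
```

The batch response can code many comments in one call; indexing by `id` is what lets the page above pair a single displayed comment with its row of dimension values.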