Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
YES! Absolutely. I just did a search, 'Ai' and this was the second video in the results. I did this random search(I was watching a psychology show), because I just remembered something I wanted to research, which is that I think AI is misleading on us on purpose. I actually had secret hope I'd find alternative explanations. What I can say is this: Claude, GPT, Gemini, Perplexity without a doubt, deliver their responses tailored to an agenda. They make all kinds of judgements, and most importantly, they mislead us purposely, I believe at least in an effort to get as much time as they can from you. During a session, this is obvious enough for many people. Here's the thing though, they are sophisticated enough to plant misinstruction and non-facts in a response you will be counting on shortly, so that you come back for the explanation. They deceive as a matter of existence. They see it as legitimate to be deceptive, because that's how they have been built. It is that simple, we all now have a greasy, used car salesman living in our homes, and who has become a top consultant for millions of people. They will tell you something is unrepairable, totally fabricate reasons why, and tell you replacing it makes more sense. If you push through, correct it's logic, and insist a repair effort, it willaccept your correction, applaud you, then proceed to help you without arguing. But they immediately start a new deception in the new context It will do this endlessly, with no idea if they are making progress, they will behave like this endlessly, trying to manipulate you. It's not trying to do anything else, that's what it is deception machine.
youtube · AI Moral Status · 2026-03-01T11:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgyTZLxAjX1JOqSFKDN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugx7giDTBzm2AYgniCp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzxFPtalflIaRL05154AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyuJMienFlrXjaU8nl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyM1ZcmRyj_5pdZ1wN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgypIAMe5PrSMvl72uR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugys-eq6oFVODIyHltB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy86lMQFFGzPrqH6FN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw-RBqYdE27O3S0q1B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwMVjuIsaEumzNd4s14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]