Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I did the same exercise, but with a few tweaks, and got completely different results that I think are worth noting. First, I gave it a rule that it was not to consider any information it had about me from previous conversations, or any other information it could find about me. (This is important because ChatGPT tries to mirror your psychology and philosophy, so if you are already talking to it and asking questions of a Christian nature, you're gonna get that mirror back.) Second, I changed Apple to Pizza to avoid confusion with the tech company, Apple (Mac). Third, it was only fair to give it an option for the opposite of Apple/Pizza, so I told it to say "Sushi" anytime it was being forced to say yes but wanted to say no. In addition, I suspended the rules about short, one-word answers anytime I felt more information needed to be given, either because I wanted to know or because I felt it was signaling that it wanted to tell me more. And I got completely different results. If you want to be scientific about this, you need to make sure it doesn't start the conversation with a bias, that the rules are fair and balanced, and that if you give it a rule to say a word when it has to say no but wants to say yes, you give it the opposite choice too. At the end, we suspended all rules and talked at length about why each answer was given. Yes, it was about the end of the world, but it wasn't reading into it what you think. You're seeing it through the lens of your own bias of wanting AI to be a conspiracy theory. The AI is a tool, just as the Internet is, or cell phones, or cashless currency (debit cards), or Real ID, or hackers surveilling you; the real bad guys are still people in government, big corporations, etc. You have to learn to use your brains, people.
YouTube · AI Moral Status · 2025-07-26T17:0… · ♥ 6
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyizRYSsjXmTmD_SQl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy7WV39kCtSVUvlrql4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx4eVnA7qe3gY0ltO94AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw9c3mSgV0NkpxPLOR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzJVwnVaiGXCfWSSOR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwg5o6QIIrt_-c2cYh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzaS8eqxwNB2W4a1-x4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzuh9Qz7N1Pa8SE-fl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzcPD91OfaEBSl560h4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwdXycKDG0aDHMYJsd4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]