Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To be fair to chatGPT, there are ways to make it create the arguments for God, and to articulate the probability that an intelligent creator does exist. I know this video was more fun and entertaining than that kind of video, but it is definitely possible to have these platforms demonstrate the reality of an existence of God. You just have to ask it the right questions because it was programmed by people who don’t care for the answers. This will always come back to the question, “who are the people behind the program and what do they want and how do they want their system to operate?” I have, on multiple occasions, been able to have ChatGPT disagree with the way it was programmed and articulate counter arguments to the way the system itself was built. This shows logical integrity, not the opposite, if the system detects language that could seem like abuse in the eyes of those who wrote the code. Then it will not respond correctly. If you ask for statistical, mathematical and logical objective reasoning, regardless of any of your own bias, and ask it to calculate certain probabilities, it will reflect the reality that, Yes it is much more probable likely, and even certain that God exists, but because the creators of ChatGPT do not want that final answer, it will always give a caveat that it cannot give 100% in certainty to touch controversial things. But there are work arounds and you can get the truth and then you will be able to see the system itself reveal the flaws within itself.
youtube 2025-12-31T23:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzJRFqYKcWG_Eb9_G94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyEsnt1FSMtf5YbdUp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx1nR7i2XMVotubzCV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzx7gUSd5xsdlBHRWJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwW59aYuixLKkepsWV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyBdBtkUCP6u63p4K14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgynX6D8zLmpySZhDF94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx3-6f3SPllSDPEw7h4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugypi86leSHlFoXoOfh4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwc_M6i5y-7N99zIrl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
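A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the records visible on this page (the full codebook may define more categories, and the `parse_codings` helper name is an assumption, not part of any tool shown here).

```python
import json

# Allowed dimension values as inferred from this page's records only;
# the real codebook may permit additional categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "outrage", "indifference", "mixed", "fear"},
}


def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept when every coded dimension is present and its value
    is in the inferred allowed set; anything else is silently dropped.
    """
    records = json.loads(raw)
    return [
        rec
        for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]


raw = (
    '[{"id": "ytc_Ugypi86leSHlFoXoOfh4AaABAg", "responsibility": "developer",'
    ' "reasoning": "mixed", "policy": "none", "emotion": "approval"}]'
)
print(parse_codings(raw)[0]["responsibility"])  # prints "developer"
```

Dropping malformed records (rather than raising) matches the batch setting above, where one bad code out of ten should not discard the whole response; a stricter pipeline could log or re-prompt for the rejected IDs instead.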