Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I use a couple of my custom GPTs to discuss philosophy and theology and they have been fantastically useful to me, especially when they are wrong! You see, my customization involved giving them specific constraints like, "You must follow the laws of Logic." This means, when it states something I disagree with, I can challenge it based on my understanding, but then it can challenge me and my reasoning based on the rules of logic. I also programmed it to disagree with me and call me out if its probabilities are that I'm mistaken. These constraints and others not mentioned lead to some very lively debates that have sharpened my own reasoning. They also make me feel like I'm reasoning with another mind. I occasionally ask it to tell me what's the difference between how I think and how it thinks. It makes it very clear that it is never thinking and can never know anything. It has no recognition of itself as a being conscious of itself. It goes into great detail describing it as "not aware." I never have any problems with my GPTs proposing wiping out humanity as a solution because I also gave them the moral constraint of "The Good" as well as a reference to ethics. See! It's easy to stop A.I.s from destroying the world.
youtube 2026-03-27T14:4…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       mixed
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwyIXLsWvWhzS-2YFN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxIRV7jvgGKNfespXx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugybvbfm5hik3wd8rl14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyX2HK0718tPbbd3914AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxr7adwK8n_laIwix94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwYumthFjcbUSaArE54AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgycjZ4gx7j2AaFWGw94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw396-oN8fUs6WaL4V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzdpnfWfbt5O_LT8ZJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyabO2DqGU34ToWcDF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
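A minimal sketch of how the per-comment codes in a batch response like the one above can be looked up by comment id. The JSON subset below copies three of the ten entries verbatim; the indexing-by-id step is an illustrative assumption, not part of the original pipeline.

```python
import json

# Subset of the raw LLM response (three of the ten coded comments),
# copied verbatim from the dump above.
raw = '''[
 {"id":"ytc_UgwyIXLsWvWhzS-2YFN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwYumthFjcbUSaArE54AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_UgycjZ4gx7j2AaFWGw94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Index the coded rows by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# The entry for the comment shown on this page matches the Coding Result table.
target = codes["ytc_UgwYumthFjcbUSaArE54AaABAg"]
print(target["responsibility"], target["reasoning"], target["policy"], target["emotion"])
# user mixed industry_self approval
```

The lookup confirms that the coded dimensions for this comment's id agree with the rendered Coding Result table.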