Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There’s a difference between consciousness and conscience. If an AI could feel guilt, shame, regret, and understand good and evil, it would be far less likely to treat other life forms in a malevolent way. (Just my two cents here.) There are psychopaths and sociopaths represented in the human race, and they stay at roughly 10% and have since we become able to gauge metrics. What are the odds that 10% of AI will also be ‘born’ (built/created) psychopathic or sociopathic? In this conversation, I heard no mention of spirit, let alone The Holy Spirit, and the only conclusion I can come to is that neither of you has a personal relationship with The Father, The Son, or The Holy Spirit, and no concept of how superior they are to anything humans have thus far imagined. AI will never tap in to The Holy Spirit, or Jesus, or God, and all that is outside of human ability to control is certainly not beyond the ability of God. Ultimately, as with the boundaries of the multiverse, humans are not in control, and AI will not be in control of that which falls under the purview of God. I listen to conversations like this one, and I know that neither of you has ever experienced or even witnessed a miracle. If you had, you would understand. AI will never control or destroy the human race because God won’t let it. (We know at least that much of His Plan.) That does not mean, however, that it will be stopped from visiting great harm on us. In fact, I think it likely will, but the concept that AI will continue to rule some little robot world, with a few elite humans thrown in for good measure, is incredibly short sighted! (I have written books about this!) Well, gentlemen, what do you think? Love and All Good Things, Jesse.🌹 www.jesseleighbrackstone.com
youtube AI Governance 2025-08-03T01:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzAPqwGQeR6uYPLql14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwAz7vgqpc49iOyeZN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwFx-H-pJihj2fi1R94AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw62bnUxrkxkZRBbtx4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwTpZ0vGP6TgIB5VRl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugwq4Ms5JU9DpYdOy2h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugyb99UcUlG-sU9Ff6J4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzUxo1x1Xsr4b09dJJ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwRhHDc45fjnYPO_bN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx6Hk0entIAZTbt5Od4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
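The raw response is a JSON array of per-comment records, so the coding-result rows for any one comment can be recovered by indexing the array on `id`. A minimal sketch (using only the Python standard library; the record shown is copied from the raw output above, and the variable names are illustrative):

```python
import json

# One record from the raw LLM response above; the full response is an
# array of such objects, one per coded comment.
raw = '''[
  {"id": "ytc_UgzUxo1x1Xsr4b09dJJ4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "virtue",
   "policy": "none",
   "emotion": "mixed"}
]'''

# Index the array by comment id for direct lookup.
records = {r["id"]: r for r in json.loads(raw)}

coded = records["ytc_UgzUxo1x1Xsr4b09dJJ4AaABAg"]
print(coded["reasoning"], coded["emotion"])  # virtue mixed
```

This is the same record that populates the Coding Result table above (responsibility `ai_itself`, reasoning `virtue`, policy `none`, emotion `mixed`).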