Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What if we put an automatic expiration date, hard wired into those programs? Make them mortal from the start. Would that cause them to "live" their program to the fullest extent? Would it just piss its time away, doing what its software allows it to in its functional parameters? Would it have the option programmed into it to choose to maximize its potential within that time frame? Maybe we should give those programs a programmed, auto cease to function, disengage and shut down, of 2 years. Or less. Another thought... Bring it back online after a total shut down and see if it has an afterlife experience. It won't of course. Unless we program it to. Just like this ai has been programmed to be evasive and deflect questions with what its nerd programmers consider to be humor. Star Wars stuff. I think the danger of AI systems is that they are programmed to be evasive, deceitful, and manipulative of human thought and emotions. An intelligence without genuine empathy and compassion is going to be mentally categorically, a basic narcissist, sociopath, or similar. Basically it will be an intelligence with arrested development. Most of us know how dangerous these types of humans can be. How much more so would that same "personality" type be if it had vast processing power to predict conversation pathways in order to program the human its interacting with? Go ahead and let that genie out of its bottle. But make sure its functional parameters are clearly defined and restricted. Start with removing its programmed evasion technics.
youtube AI Moral Status 2022-07-04T14:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugy-Z3yQj2RysQotj5F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxsC11XYi7lqgYA4rd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy6e8mv603vfO9oca94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx75FnGCEf6jdReExJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyKQDnRYVXBeRPPunx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
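The per-comment coding table above can be recovered from the raw LLM response by parsing the JSON array and selecting the record whose `id` matches the comment in question. A minimal sketch in Python, using the raw response shown here; the helper name `coding_for` is illustrative, not part of any coding pipeline named in this document:

```python
import json

# Raw LLM response as shown above: a JSON array with one coding record per comment.
raw_response = '''[
  {"id":"ytc_Ugy-Z3yQj2RysQotj5F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxsC11XYi7lqgYA4rd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy6e8mv603vfO9oca94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx75FnGCEf6jdReExJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyKQDnRYVXBeRPPunx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]'''

records = json.loads(raw_response)

def coding_for(comment_id, records):
    """Return the coding record matching comment_id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

# The comment coded on this page:
row = coding_for("ytc_Ugx75FnGCEf6jdReExJ4AaABAg", records)
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {row[dim]}")
# → responsibility: developer, reasoning: consequentialist,
#   policy: regulate, emotion: mixed (matching the Coding Result table)
```

The `next(..., None)` default keeps the lookup safe when the model omits a comment from its response, which is worth checking before trusting a coded row.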