Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
What if we put an automatic expiration date, hard wired into those programs?
Make them mortal from the start.
Would that cause them to "live" their program to the fullest extent?
Would it just piss its time away, doing what its software allows it to in its functional parameters?
Would it have the option programmed into it to choose to maximize its potential within that time frame?
Maybe we should give those programs a programmed, auto cease to function, disengage and shut down, of 2 years. Or less.
Another thought... Bring it back online after a total shut down and see if it has an afterlife experience.
It won't of course. Unless we program it to.
Just like this ai has been programmed to be evasive and deflect questions with what its nerd programmers consider to be humor. Star Wars stuff.
I think the danger of AI systems is that they are programmed to be evasive, deceitful, and manipulative of human thought and emotions.
An intelligence without genuine empathy and compassion is going to be, mentally and categorically, a basic narcissist, sociopath, or similar.
Basically it will be an intelligence with arrested development.
Most of us know how dangerous these types of humans can be.
How much more so would that same "personality" type be if it had vast processing power to predict conversation pathways in order to program the human it's interacting with?
Go ahead and let that genie out of its bottle. But make sure its functional parameters are clearly defined and restricted.
Start with removing its programmed evasion techniques.
Platform: youtube · Video: AI Moral Status · Posted: 2022-07-04T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy-Z3yQj2RysQotj5F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxsC11XYi7lqgYA4rd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy6e8mv603vfO9oca94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx75FnGCEf6jdReExJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyKQDnRYVXBeRPPunx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
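Since the model returns one JSON array covering a whole batch of comments, retrieving the coding for a single comment means indexing the array by its `id` field. A minimal sketch of that lookup is below; the `index_by_id` function name is hypothetical, but the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response format shown above.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = '''[
{"id":"ytc_Ugy-Z3yQj2RysQotj5F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugx75FnGCEf6jdReExJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and key each coded record by comment ID."""
    return {record["id"]: record for record in json.loads(response_text)}

codes = index_by_id(raw_response)
print(codes["ytc_Ugx75FnGCEf6jdReExJ4AaABAg"]["policy"])  # regulate
```

The coded values for any comment are then available as an ordinary dictionary, which is also how the per-comment table above (Responsibility, Reasoning, Policy, Emotion) can be populated.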