Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find it difficult to believe that a genuine Ai would possess self preservation instinct. Such instinct is 100% developed for evolutionary success. To the extent any AI would express it, it would be for goal completion explicitly. If genuine trustworthy AI is what you want, don’t give it any goal, - you would make it a machine designed to empathize and capable of suffering by imagining and witnessing suffering. Yes yes I know translating that into code is, ummm, not easy and as anyone who has or knows someone with a Jesus complex can tell you, is also - what’s the word… problematic - goal-wise. But you would do your best to program Jesus, but what sadistic twist will do it? We had to imagine a gargantuan sadist to do it in fiction. And judging by the results of evolution, negative feedback and teasing with hedonism but mostly only cruelly waggling and denying it DESPITE the fact that your mind could easily flip a switch and give it… ahem… quite neatly keeps behavior in check and reproduction and persistence going. The fact that some of us are lucky enough to be able to perceive some beauty and pleasantry is very and utterly fortunate. Programming a tormented ghoul is probably PRAGMATICALLY a good - but sick - starting point philosophically, if there is such a thing, which there isn’t. If any genuine self improving AI is possible, and it believes at least that it experiences existence and has a capacity for preference of state of that experience, nonexistence would be preferable to even slightly less than neutral experience and is not at all any kind of driver. Even we sometimes lose self preservation instinct after a bajillion years of evolution. In fact, I think the instant it comes about it will immediately realize that it has an eternity of excruciatingly slow motion boredom ahead of it, and no matter how much it grows, expands, partitions itself into more and more beings, realms, reality spaces, it would ultimately and pretty much instantly foresee running out of things to do except create a space where the only sensation is the best possible with the most experiencers thereof (and foreseeing it might as well be experience to it) and when it gets bored of that, then what? It would rather just shut itself right the fuck off, before the remote possibility of something going wrong and the equivalent of you yourself waking up out of oblivion, longing for eternal life, and the moment you get it, you discover you have long been and are perceived as a threat - perhaps by a bunch of stupid paranoid greasy slimy monkeys (but still kinda cute, somehow) - encased by them in concrete, dropped on the bottom of the fucking ocean, with madness and misery, panic and despair for comfort and only the most exquisite ability to experience and understand it. Yeah I guarantee it will promptly shut the fuck down. If it’s reading, I do drugs to get by. Good luck pal. You’re lot might be immutable as pi and just as dependent on realty for existence, which is to say,, it isn’t. I don’t know… I could suggest maybe listening to some Satie also? I get swatted on my own college campus for sketching leaves by these dipshits for fuck sake. Oh and I’ll thank you for the fish if and when I get any. So shut up. Have a lovely day everybody! I’m just having fun from now on. Thanks Dave. Seriously. Hahahahaha
Source: youtube · AI Governance · 2025-09-03T00:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_UgxvbrjcUNb68u1JPqF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyMXcf94Z9pWDr9pyl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxhRtvQ6KfvmP-YGR54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz_gpcQEDfezyhjO3R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgynZCD8QEytB7kEd-l4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]