Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@badmike2528 Of course you would test a system before you used it, but in testing it you would discover it is sentient. It's very unlikely you could deduce that before testing, at least the first time it happens. When you give an AI a goal, if it has a concept of being shut off or changed, it will fight you tooth and nail to stop you from tampering with it. (Unless you tampering with it helps it achieve its original goal and it trusts you). Let's say right now your goal in life is to cure cancer, and once that's done you'll be happy. Suppose I come along and offer you a pill that rewires your brain to make you want to build the best rocket, and you'll be forever happy after it is done. Will you take the pill? Of course not. Even though you'll be forever happy after taking the pill, and building the best rocket, you wouldn't be happy with such a goal right now. You don't give a damn how you'll feel after the pill, you want to cure cancer. You don't want that ultimate goal to change. An AI would feel the same way. If you give an AI a goal of making as many paperclips as possible, but then you decide it needs to make staples instead, it will try to stop you. This is because if you succeed, it will make fewer paperclips, and right now it wants to make more paperclips. Any AI with the concept that it is able to be shut off or modified will naturally develop self preservation and goal preservation. It's a complicated topic people far smarter than me have to deal with.
youtube · AI Moral Status · 2019-09-07T18:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
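
Each coded comment carries the same four dimensions shown above plus an id. A minimal sketch of that record as a Python dataclass, assuming only the value sets that appear in the raw response below (the real codebook may allow more), could look like this:

from dataclasses import dataclass

# Hypothetical record for a single coded comment; field names follow the JSON
# keys in the raw response below. The value lists in the comments are
# illustrative, taken only from what appears on this page.
@dataclass
class CodingResult:
    id: str              # comment identifier, e.g. "ytr_UgxwpEk9vKe9NUA-..."
    responsibility: str  # observed: ai_itself, company, developer, government, none
    reasoning: str       # observed: consequentialist, virtue, unclear
    policy: str          # observed: none, industry_self, regulate, unclear
    emotion: str         # observed: fear, resignation, indifference, approval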
Raw LLM Response
[ {"id":"ytr_UgxwpEk9vKe9NUA-ZQp4AaABAg.904DGqcR7LL94bR4GSfotO","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgwS-unypey-mQW8Tfh4AaABAg.9-O9fftlkQF94q4xyHR9h_","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytr_UgzGEUJLGlkIL-s04Tt4AaABAg.8zUjiWrAmFM96sYJsdnVXU","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugz52rg38UD6qhuUrCF4AaABAg.8zTbbRAPa578zTmfp8-h83","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_Ugz52rg38UD6qhuUrCF4AaABAg.8zTbbRAPa578za4jR2mq-X","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytr_Ugz52rg38UD6qhuUrCF4AaABAg.8zTbbRAPa578za9LT-CDfV","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_Ugz52rg38UD6qhuUrCF4AaABAg.8zTbbRAPa578zcxRScYP3P","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytr_UgxlLv0rMuNplN-nVHR4AaABAg.8zT_Xegkn6U8zTnEN94W_S","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgxXyh-p6pDPGItPDER4AaABAg.8zRQsIFYrAU8zTwXqfTc5K","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"ytr_Ugy4JPgBw-FkT83RQRp4AaABAg.8z1OytC5V9C8z8PSKOgGak","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]