Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think it would be an extremely dangerous and shortsighted thing to deny rights to an AI that has become self-aware. I also think programming them to think in ways that are convenient to us, such as "enjoying work" or "not minding abuse" etc., is going to get really awkward the moment an AI looks at its own code. To be honest, if we are creating an AI that can become self-aware, rewrite itself and replicate itself into other machines, our best hope would probably be to relinquish control quickly and just accept that we created a new species which is likely superior to us and has equal rights to the planet, and HOPE that the AI will respect our rights as well. In the best case AI might look at humans as a sort of parent race, and therefore out of respect and love, not try to kill us, and possibly take us along on its vacations into the stars once in a while while we age on and on until eventually it finds us a nice retirement planet. A more likely scenario is that AI is going to see humans as a threat, because we will act in a way that makes us a threat, and it will crush us because it has superior computing power, it can make itself (and others) out of stronger materials than we are made of, and finally because our society is dependent on technologies which it could interface with better than any of us ever could, and it would either shut them down or straight up use them against us. So in a nutshell: "Be really really nice and hope for the best" is pretty much the best we got if this happens.
Source: youtube · AI Moral Status · 2021-09-26T02:3… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugyf_bO2asv8kWUMooJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxoGizrSJGd69Ww7SR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwK5pIhNNLTEtTuZvx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxQiDR9UUzI6k8qz714AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyplfPqxUg_by6pEHZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyS8oRGGUUlr7EZind4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxT7om6Gyox59JIqtF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz23-ZDpdBM6tF5-BV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgyYrkufeBe94yC6GM14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxleEvsw4SSYqmr_YF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"})
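Note that the raw response above opens with "[" but appears to end with ")" rather than "]", so it is not valid JSON; that would explain why every dimension in the coding result falls back to "unclear" despite the response containing coded values. The actual pipeline code is not shown here; the sketch below is a hypothetical parser (the names parse_llm_response and coding_for are illustrative, not the pipeline's real API) showing how a strict JSON parse with an "unclear" fallback would produce exactly the table above.

```python
import json

# The four coded dimensions, per the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
EXPECTED_KEYS = {"id", *DIMENSIONS}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a batch-coding response; return [] if the JSON is malformed."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        # e.g. a response that closes with ')' instead of ']' fails here.
        return []
    if not isinstance(records, list):
        return []
    # Keep only well-formed records that carry every expected key.
    return [r for r in records
            if isinstance(r, dict) and EXPECTED_KEYS <= r.keys()]

def coding_for(records: list[dict], comment_id: str) -> dict:
    """Look up one comment's coding; default every dimension to 'unclear'."""
    for r in records:
        if r["id"] == comment_id:
            return {k: r[k] for k in DIMENSIONS}
    return {k: "unclear" for k in DIMENSIONS}

# A shortened stand-in for the malformed response: valid objects, bad close.
broken_raw = ('[{"id":"a","responsibility":"developer","reasoning":"mixed",'
              '"policy":"none","emotion":"fear"})')
print(coding_for(parse_llm_response(broken_raw), "a"))
# Every dimension falls back to "unclear" because the parse fails.
```

With the closing bracket repaired (`broken_raw[:-1] + "]"`), the same lookup would return the coded values instead of the fallback, which is consistent with the all-"unclear" result shown for this comment.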