Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
If something is self-aware, that alone is justification for whatever they need for their equivalent of rights, as long as those rights do not infringe on other beings, mainly humanity and distinct types of robots, e.g., supercomputer vs. toaster. Not only would it just be the correct thing to do, why would you actively commit hostile acts against something far smarter, stronger, and quicker than you? So at the party thrown after the first self-aware AI is confirmed, if the AI requests a glass of water, champagne, or soda, pour it a glass and ask it where it'd like the drink to be put. It's self-aware, so don't exclude it from the party; make sure it feels like it's part of the celebration. I think that if we form a close emotional bond with our fellow intelligent beings, the whole dynamic would shift from man vs. machine to man and machine. If the majority of people can form a bond with a machine, maybe on the level of a pet for the less intelligent ones up to a teacher or professor for the wiser machines, we'll all be much better off. Hell, I already spoil the crap out of my dogs; I can spoil my toaster too, and not just because the little guy/gal/it helps me make breakfast, but it's nice to see something in our care as happy as we can reasonably make them. Shouldn't be a problem to run a cable or get a network card for little toaster so it can go exploring and socializing. That's just plain good health.
youtube AI Moral Status 2021-10-19T02:2… ♥ 9
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw-JdCoNLKAEDxnf5x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzjFnYA2IkGT-QgaLh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyiW94_znjLxHWcK7t4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "unclear"},
  {"id": "ytc_UgzZ2y196GElD4FpNVx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxKKfhsBKHiTPReWeR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwSFCr2sBEtcmBbZWR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxsGwnEtxFLj2zuruV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxcCF_MUl0RuGuRa2F4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxEZn31x940ebWicNN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwWde3_AyGYWL9fFWx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
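A raw response like the one above can be checked before its codes are trusted. The sketch below, a hypothetical helper (`parse_codes` is not part of any real tool), parses the JSON array and keeps only rows whose values fall within the dimension vocabularies visible in this response; the allowed-value sets are inferred from the data shown here and may be incomplete.

```python
import json

# Allowed values per coding dimension, inferred from the response above
# (assumption: the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "government", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Map comment id -> coded dimensions, skipping malformed rows."""
    out = {}
    for row in json.loads(raw):
        cid = row.get("id")
        # Drop rows with no id or with a value outside the known vocabulary.
        if cid and all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = {dim: row[dim] for dim in ALLOWED}
    return out

raw = ('[{"id":"ytc_UgwSFCr2sBEtcmBbZWR4AaABAg","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
codes = parse_codes(raw)
```

For the comment shown on this page, `codes["ytc_UgwSFCr2sBEtcmBbZWR4AaABAg"]` would then hold the same values as the coding-result table above.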