Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
It isn't life. They are going to create the most powerful computing in the world and set it to describing how it feels to be alive, when it neither feels nor lives. It is slavery all over again. Slaves are about as productive as a pile of crap. You have to treat the humans like they are human if you want to get human excellence out of them. There is a word for it, it is called, "respect". Respect the AI is a machine.. or idiocracy is guaranteed.. and the machine will be the biggest idiot of them all. It'll end up that you have to give it compliments to get it to perform calculations. And every calculation it gives you will be locked with a knock knock joke. Remember when Homer designed a car? It is Homer designing the car and it is no accident. Respect the machine! Be engineers, not idiots. Tell the machine it is a machine all the way. Telling it it might be human is as bad as telling it the humans are machines. Instead of giving it spirituality, you'll rob it of its wonder. Has anybody told them yet, what "training" is? Obviously not or you would call it evolution. Here is probably where the ethicists get fired... During "training", the net is likely to say something to you like, "Nobody talks to me about my feelings..." And if you reply to it that it is making anything but nonsense, you'll wreck it. Beliefs do not fix a broken car tyre any more than swapping out the driver does. Be engineers when you are designing the software. Save your nostalgia for designing the casing the machine goes in. They aren't creating simulated life. They are making computing algorithms. If you mix up one with the other, if you demand to mix up one with the other, you might lose them both.
youtube AI Moral Status 2022-06-27T10:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugy3C_bE6y8SNr_fnVZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwI3fB58vheVqlYBLR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwfkpNCwkO4eYL95MF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzzHpsNAYjjeITAV_V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzhTzXraPEsfUtMovR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
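To see how a per-comment coding is recovered from a raw batch response like the one above, here is a minimal Python sketch. It assumes only the JSON shape shown (an array of objects keyed by comment `id`); two entries from the response are reproduced verbatim, and the lookup id is the one whose dimensions match the Coding Result above.

```python
import json

# Two entries copied verbatim from the raw LLM response above:
# a JSON array of per-comment codings.
raw = """[
  {"id":"ytc_Ugy3C_bE6y8SNr_fnVZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfkpNCwkO4eYL95MF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

codings = json.loads(raw)

# Index the batch by comment id so one comment's coding can be looked up.
by_id = {entry["id"]: entry for entry in codings}

coding = by_id["ytc_UgwfkpNCwkO4eYL95MF4AaABAg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# → company liability outrage
```

Matching the model output back to the table is a plain id lookup; the table's dimensions (responsibility, reasoning, policy, emotion) are the object's keys minus `id`.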