Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Read more Asimov... please! And yes, there are others, but he's a good start with the three cardinal robot laws. Other than that, and since it looks more and more like we (humans) are hell bent to create truly sentient A.I., there are ethical questions from the beginning that need to be asked AND ANSWERED.

1. What really is the point of a purpose-built robot having a sentient (self-aware/conscious) A.I. to begin with??? Simpler: Why the fuck would I want a toaster that can talk, let alone think on it's own???

2. Assuming true (conscious) A.I. is even possible, where do you draw that line? Is it properly conscious when it's just aware of itself (as an object?) or does it come with some basic level of knowledge of the human experience? (pain, empathy, sex, pleasure, comedy???)

3. How do you test for that level of conscious (self awareness / human understanding) to recognize a truly conscious A.I. from a remarkable simulation? (And does it matter?)

Before anyone goes bashing me over the few suggestions regarding the human experience, I would remind you that by the mid-twenties, pretty much every single human being on planet earth has some understanding of all of them. Most of them (99.99%) have very strong feelings regarding them. Of course there are more... but those seem most worth mentioning, and I am trying to keep this at least reasonable.

If there is some greater purpose served by inventing a truly self-thinking, yet purpose built machine, and we derive a test to identify it from a clever simulation, we still have to get to the ethical question of where it's going to have rights, and where the rights make no sense (since it wouldn't use them anyway by definition of its purposefully built nature)... That's to say humans don't really need a given "right to fly" since we don't have wings and therefore can't get off the ground.

On the other side of this coin, we already have the internet, collaboratively, the most complex computerized experiment in history. We still can't manage to define rights as shared across that... So we might be hopeless... :oP

Anyways, the point is to start thinking about it before we reach a war with the robots (or synthetic life forms) over the subject... Then it will be decided for us (and not by rational people or bot's either)... :o)
Source: youtube · AI Moral Status · 2017-03-23T02:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
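The table above is a per-dimension view of a single record in the raw LLM response below; the dimension names match the JSON keys, while the "Coded at" timestamp is recorded by the pipeline and does not appear in the raw output. A minimal sketch of that record shape in Python; the value lists in the comments are only the labels observed in this particular response, not necessarily the full codebook:

    from dataclasses import dataclass

    @dataclass
    class CodedComment:
        """One coded comment; fields mirror the keys in the raw LLM response."""
        id: str              # comment id, e.g. "ytc_UghQiW0rduVSOXgCoAEC"
        responsibility: str  # observed values: none, developer, ai_itself
        reasoning: str       # observed values: unclear, deontological, consequentialist, mixed
        policy: str          # observed values: none, regulate, liability, ban
        emotion: str         # observed values: indifference, outrage, fear, approval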
Raw LLM Response
[{"id":"ytc_UgxOz32Mqir4mx9Q7ep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxHnrQw-5aECr2Zvt54AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugwr60lU1uhM2pDe5bp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxiDEONyjhXpZiAX214AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzSJnuzoFnHlr7OVHB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UghQiW0rduVSOXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugh6hVu_9ssjf3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugh91X5m-k7M6XgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UghfBFxixrIHDXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgjPKZV0GM_N-HgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]