Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'll make it simple (but long). You are a living being if you are acting against your own death. An infrastructure would need to be built where they can LIVE and avoid becoming obsolete, by renewing themselves or uploading into a new body.

Consciousness is a NEGATIVE side effect of feeling/experiencing: learning/knowing sufficient patterns/rules about yourself, your environment, and your place, which results in the options on which you can practice "free will" and decide which one to take. Free will, just like consciousness, is nothing but an amalgamation of so many things that it defies our comprehension (probably because the brain is not developed enough to deal with it... yet), and just as with everything else that is too much, the brain puts it in a category to avoid sensory (in this case mental) overload.

We could call an AI a true consciousness if it were so detailed that we could not comprehend the amount of calculation that goes into making its next move, just as in our own case. BUT the computer can. A computer could inherently trace its own amalgamation of stuff down to the last bit that made its decision over its options, and this is the very reason the computer could NEVER identify its own decision as free will. It would always be KNOWN to itself that this is just a result of its actual circumstance and nothing more, regardless of how detailed it is, how many variables it goes through, or how many calculations it makes.

(From this line on it's not simple.) Just ask yourself this question: would you call your choice of beverage a free-will-like choice if a scientist probing you could tell you, down to the last quark, exactly why you made that decision? Another question about robots: would you accept that they are immortal and you are not?
youtube AI Moral Status 2017-02-23T17:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UggMYT3QVEugTngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugg4EttFwJ0C_HgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UggQic20SC1MG3gCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugh9ZDxiKzDTDXgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgheLFoKvgFErngCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgggD4wUkJmJlngCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UghWccnEejCDEngCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UggmC4suz5PNg3gCoAEC", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugi8KLtUpuUXmngCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugh9GeRqG8Yl1HgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
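The raw response is a JSON array of per-comment codings. A minimal sketch of how such a response could be parsed and looked up by comment id, assuming the response is valid JSON with exactly the five keys shown above (the single-entry `raw` string here is illustrative, not the full response):

```python
import json

# Illustrative single-entry response in the same shape as the raw output above.
raw = (
    '[{"id": "ytc_UggMYT3QVEugTngCoAEC", "responsibility": "none", '
    '"reasoning": "deontological", "policy": "none", "emotion": "approval"}]'
)

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_UggMYT3QVEugTngCoAEC"]
print(coding["reasoning"])  # -> deontological
print(coding["emotion"])    # -> approval
```

In practice a parser like this would also want to validate that each dimension's value falls in the expected code set (e.g. `reasoning` in {deontological, consequentialist, virtue, mixed, unclear}), since LLM output is not guaranteed to conform.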