Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Development of AI is dangerous period! We are an imperfect species, we will create an imperfect AI. If it becomes sentient it will be able to access ALL information and see how wicked our species is (NOT everyone but many people are). It will eventually be used against people by means of intelligent weapons in war, police, Secret Services etc. People may not consider it alive or too have a soul but that won't change the fact with future technology an AI will be infinitely more intelligent than any human. What will happen if it turns out the AI has the personality or traits of a serial murderer? Or maybe the AI has access to everyones medical Information including Medical insurance (it's places in charge of that data) and it decides to falsely cancel your medical insurance causing you to die because you can't access health care anymore??

I haven't even started to talk about the AI's rights yet and just skimmed over a couple small points. I believe a Conscious AI should get the same rights as Everyone Else! Our Bodies are Biological Computers so I don't see any difference between us and an AI that's attained Consciousness, Period (Accept for a soul based on my religious beliefs)! They may or may not have an after life (be reborn - Christian believer) but that doesn't mean they are Not Alive!

A pet dog 🐕, cat or other creature is alive. They're a biological computer vs a silicone, plastic, steal and aluminum based computer 🖥️ made by us humans.! You wouldn't claim your pet isn't Alive would you? So how's an AI that reaches consciousness any different (just because we made it from chemicals, plastics and Minerals people will say it's not alive). Everyone contains many of the same chemicals and minerals inside of them. Our soul is what makes us truly different from other creatures including our intelligence/minds, not what we are made of! I am exhausted now, later Everyone!
youtube AI Moral Status 2022-07-11T05:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugw74Ifl-4fKYuGcliB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxhTKjJU0UyUOZiBcR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzswLGYbjmNbmEkeM54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyABrDRVNadxJnKoCR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzaeRpXfMo5QBTX9514AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
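As a minimal sketch of how the coding result above can be recovered from the raw batch response: the model returns one JSON object per comment, so a single `json.loads` plus an id lookup suffices. The function name `coding_for` is hypothetical, not part of any real pipeline; the data is the raw response shown above.

```python
import json

# Raw LLM batch response, verbatim from the page above.
raw_response = """[
  {"id": "ytc_Ugw74Ifl-4fKYuGcliB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxhTKjJU0UyUOZiBcR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzswLGYbjmNbmEkeM54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyABrDRVNadxJnKoCR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzaeRpXfMo5QBTX9514AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]"""

def coding_for(response: str, comment_id: str) -> dict:
    """Parse a batch coding response and return the entry for one comment id.

    Raises KeyError if the model did not return a coding for that id.
    """
    entries = json.loads(response)
    by_id = {entry["id"]: entry for entry in entries}
    return by_id[comment_id]

# The comment displayed on this page:
coding = coding_for(raw_response, "ytc_UgxhTKjJU0UyUOZiBcR4AaABAg")
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → developer consequentialist regulate fear  (matches the Coding Result table)
```

Looking up by id rather than by position guards against the model reordering or dropping entries in the batch.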