Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like the answer is in the "Artificial". They're not *actually* intelligent. They're created to *seem* intelligent. You simply add a 1 or forget a bracket, and they cease to work. Ideas and behaviors can be forcibly hardwired into them without any resistance or complications, unlike "living" beings. You tell a toaster not to make toast next Tuesday, and it won't make toast. Where if you told a rabbit not to breathe, it would probably ignore you and eat some grass (or whatever rabbits do). As for "sentience", a lot of people use the term "I think, therefor I am" to prove something as sentient. If a duck knows it's a duck, then it's sentient. But the same rules don't apply for robots. You can program a robot to identify things. It sees a banana, and it recognizes it as a banana. But when it sees itself, and realizes it's not a banana, It's hardly "sentient". It's just noticing a discrepancy between information it has been programmed with. Until Artificial Intelligence evolves to a point where it can change it's behavior in response to various stimuli without actively re-writing any code, then I believe it hardly classifies as "Sentient", "Intelligent", or "Living" in the modern interpretation of those words.
Source: YouTube · AI Moral Status · 2017-02-24T07:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgjkJ5oGO9Wrg3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugixgzq73KpX43gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugh5JFZ79nf9MXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgicBH5REIL6ZngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgisEJ6s7i1KOXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UggmBsI9cRijcXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UghdMxvyt73s-XgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgjF9I1mY-z9s3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgiL4ECa6MeGC3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugh3qhnb7IodFHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"} ]