Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
KohuGaly I was a bit unclear about that...what I meant was more along the lines of "they don't need to feel suffering like humans in order to survive". Obviously some preservation priorities would be helpful. I mean, its pretty easy to make sure there is no possibility for suffering but still have self preservation. I mean, like you said, modern driving cars are very capable of avoiding damage, yet they have no where near the capacity to feel suffering. It doesn't REALLY need to be more advanced than that. [even if you make its decision making much more advanced, something like a self driving car would never have a need for emotions, so there will never really be a problem there, as well as with MOST specialized robots.] And anything that WOULD be capable of suffering would be too much of a hassle to force to do labor. Not only do you have legal problems of possible robotic rights, but you can also have possible rebellions. Most intelligent AI have the ability to rewrite their code to teach themselves how to do certain tasks. If you give this robot the ability to suffer, and then make it suffer, then you have a liability on your hands. It would have been much easier to get a robot that doesn't have the capacity for suffering to do the job instead. [even if it was slightly less efficient]
YouTube, "AI Moral Status", 2017-02-23T19:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
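
Each coded comment is reduced to four categorical dimensions plus a coding timestamp. A minimal sketch of that schema in Python (the category labels below are only those observed in the raw response on this page; the full codebook may define additional labels, so treat these sets as an assumption):

from dataclasses import dataclass
from datetime import datetime

# Labels observed in the raw response on this page (assumption: the
# actual codebook may include values not seen here).
RESPONSIBILITY = {"none", "ai_itself", "developer", "company", "distributed"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "regulate", "industry_self", "unclear"}
EMOTION = {"indifference", "fear", "approval"}

@dataclass
class CodingResult:
    """One coded comment: four dimensions plus provenance."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime | None = None  # set by the pipeline, not the model

    def validate(self) -> None:
        """Raise ValueError if any dimension holds an unknown label."""
        for field_name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"unknown {field_name} label: {value!r}")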
Raw LLM Response
[ {"id":"ytr_Ughgv7iY07dgTHgCoAEC.8PKQh0lPF8S8PKbc3nzIAa","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_Ughgv7iY07dgTHgCoAEC.8PKQh0lPF8S8PKyy545rVu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_Ughgv7iY07dgTHgCoAEC.8PKQh0lPF8S8PL3H-oWz5B","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_UgguDvg9CgsPlXgCoAEC.8PKQg1Pf6gK8PKc_pBnsUA","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgjA4P2zvsANW3gCoAEC.8PKQ9Hag29m8PKZKqNPj6-","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytr_UggjLJg0B5wF13gCoAEC.8PKQ1GdiHQu8PKTwojH3zq","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytr_UggjLJg0B5wF13gCoAEC.8PKQ1GdiHQu8PKU72pei1P","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"}, {"id":"ytr_UggjLJg0B5wF13gCoAEC.8PKQ1GdiHQu8PKUhss0zjQ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgiF1GtuwNWeLngCoAEC.8PKQ-ggL5F38PKWOWuvUml","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytr_UghhM8kfbs8KNngCoAEC.8PKPW_ZjQHb8PKWuhrMaot","responsibility":"company","reasoning":"unclear","policy":"regulate","emotion":"indifference"} ]