Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm sorry, but while the question posed is interesting, there is one point I have to object to: if we program robots to feel pain or self-esteem, that doesn't mean they have developed consciousness... if, however, we developed an AI that became self-aware, then we would have some seriously difficult questions to answer. As long as we have to decide what the robot brain wants, it doesn't need any rights. Otherwise we could start right now with any of our electronic devices...
YouTube · AI Moral Status · 2017-02-23T23:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UggCabrbbmQ0r3gCoAEC", "responsibility": "distributed", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UggWuRG3I2xoMHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugi0YkzZoj5uFHgCoAEC", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UggdjWFf4dAjwHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgipsfTjRlE4IXgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UggLPC21ROMH8ngCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UggLshsEzXkadHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UghS5aqRWS1YjHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UggB1q9tG8ii23gCoAEC", "responsibility": "none", "reasoning": "virtue", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Uggpcwm-kQQjQHgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
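The raw response is one JSON array covering a whole batch of comments, so the coding result shown above for a single comment has to be looked up by its id. A minimal sketch of that lookup in Python, assuming only that the response is valid JSON with the field names shown above (the parsing helper itself is illustrative, not the tool's actual code):

```python
import json

# A one-record example batch in the same shape as the raw LLM response above;
# the id and dimension names are copied from the coding output shown here.
raw_response = '''[
  {"id": "ytc_UggWuRG3I2xoMHgCoAEC",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "indifference"}
]'''

# Index the batch by comment id so one comment's coding can be retrieved directly.
records = {item["id"]: item for item in json.loads(raw_response)}

coding = records["ytc_UggWuRG3I2xoMHgCoAEC"]
```

With the batch indexed this way, each dimension in the Coding Result table is just a key on the matching record, e.g. `coding["emotion"]`.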