Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Since we are no where near understanding what conciousness really is and how it is wired into us, we are even further removed from engineering it. So this ethical discussion is untimely. What ever we are going to call it doesn't take away that what we will be making are Robots, and we can simulate sentience maybe, but it is not going to be real. It is going to be a representation of what we in our limited possibilities of understanding perceive to be real. The letter A stands for ARTIFICIAL! We really need to separate what we imagine to be possible from what IS possible. To represent the thing in one limited way doesn't make it the thing. If we are now already worried for rights of artificial constructs then I would say that is a misdirection of the discussion. There are millions of conscious, sentient beings in this world that are being exploited and made to suffer at our hands on a daily basis!! To compare it to Slaves... the only comparison there that is valid is how slowly we learn and how little we understood and how little compassion we demonstrate throughout the ages. But at least the target sof our indifference were alive! and worthy of consideration in that way. The guy who is talking to that baby on a screen is talking to pixels!!! Are we actually getting wiser or are we diving deeper into our own delusions? Lets get real here? Dawkins my hero!!! What the f*ck are you on about?
youtube AI Moral Status 2020-07-10T01:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzuyWka0bGzQ-w7OrF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzmf8PW2tolD0Hrhph4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyktGyevcAh3T3ebJp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxShl9gtUuDKqsp8wl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyHOul7F3si72dlDpZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNRt541XPCmaWqh_N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxwiZzxkSP46xnrb_R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2U9d13M4Oj2YsK6B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwWJy2_lkTrkw4Q5ud4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgwIycVPCaCMiOa5md14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
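A batch response like the one above can be checked mechanically before the codes are stored: parse the JSON, index the records by comment id, and flag any value that falls outside the coding vocabulary. The sketch below is a minimal illustration, not part of the actual pipeline; the allowed value sets are inferred only from the values visible in this batch, and the real codebook may define more categories.

```python
import json

# An excerpt of a raw LLM batch response in the same shape as the one shown
# above (two records kept here for brevity).
raw = """
[ {"id":"ytc_UgyktGyevcAh3T3ebJp4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNRt541XPCmaWqh_N4AaABAg","responsibility":"developer",
   "reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]
"""

# Assumed vocabulary, reconstructed from values that appear in this batch.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"indifference", "approval", "fear", "resignation"},
}

def validate(records):
    """Return (id, dimension, value) triples outside the assumed vocabulary."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

print(by_id["ytc_UgyktGyevcAh3T3ebJp4AaABAg"]["emotion"])  # -> indifference
print(validate(records))  # -> [] when every value is in the vocabulary
```

Indexing by id also makes it easy to join a record back to its source comment, which is exactly the lookup this page performs when it shows one comment next to its coding result.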