Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
+En Sabah Nur I guess this is just one of those things that we could come to a logical disagreement on. I do think that you can not make an android alive *at this moment*, but I do still believe that it could be possible in the future, and when it does happen, I don't see a reason to not give them rights. About the laptop question. My answer was basically that _at this point in time,_ we can not engineer an AI that acts human-like, even to the level of someone who's very mentally handicapped. And I was saying that, if I had a laptop at this time that acted 'very human-like', it would still have its obvious machine-like shortcomings. But what if we were to perfectly recreate a human brain, sometime in the future? I think it is possible, and if it is, then what reason do we have to not treat it like... well, a human brain? If it's identical in every way, every thought, everything except for that they aren't made of organic materials, what valid reason is there to treat them like just an 'object' at that point? That would make them equally intelligent to us, that would make them able to converse with us, share opinions, share thoughts, feelings, etc.. So why exactly would it not be at least somewhat unethical to still treat it like an object, when it is functionally identical to us in every way except for how it works from the inside? And... well, the fact that we are a composite of living organisms working together: what difference would that make in how we 'feel'? Why wouldn't we be able to recreate something mechanically that can feel in, functionally, the exact same way as us? I'd say that if something reaches the sufficient level of intelligence, it should be treated like it's intelligent. That's because it is, artificial or not. We haven't quite reached that point yet, but when something is functionally identical to a human in every way when it comes to intelligence, I'd say there's no problem in doing so.
youtube · AI Moral Status · 2018-06-06T17:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
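
Structurally, a Coding Result is a four-dimension record plus a timestamp. The sketch below models that record in Python as an illustration only; the class name and field types are assumptions mirroring the Dimension/Value table above, not the project's actual schema.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    # Illustrative record type; fields mirror the Dimension/Value table
    # above. Names and types are assumptions, not a confirmed schema.
    responsibility: str  # "none", "user", or "developer" in the batch below
    reasoning: str       # e.g. "deontological", "consequentialist", "unclear"
    policy: str          # e.g. "regulate", "ban", "liability", "none", "unclear"
    emotion: str         # e.g. "approval", "outrage", "fear", "indifference"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"
```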
Raw LLM Response
[ {"id":"ytr_Ugwu5KhSaSD6YSzpwd54AaABAg.8d2l9dvkSDF8f4V-dvb97i","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugwu5KhSaSD6YSzpwd54AaABAg.8d2l9dvkSDF8ggaA11nFu0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UgwuZTliD7bFdRyGLiZ4AaABAg.8czMc5K6n8J8gKBo-emT6r","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytr_UgwzcZqTkUkaUeUEli14AaABAg.8cTBjju4NAb8h4n_tLsYjI","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_Ugwd-9mBgZs782Dh9bB4AaABAg.8c80Ghr48Iy8hKMhG34Icp","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytr_UgybUCpqkN7hOtnEHRV4AaABAg.8c2bQm84IGa8h9OiI7KABw","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgybUCpqkN7hOtnEHRV4AaABAg.8c2bQm84IGa8h9iyJdLui8","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgybUCpqkN7hOtnEHRV4AaABAg.8c2bQm84IGa8h9nj_vXFDW","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytr_UgxJYALxNENSp0wpy3p4AaABAg.8c-ZtZONrCH8d9WcM2zut-","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgxMTW8h5zCmqaA2m254AaABAg.8_wmKjzVKj68b8Gvtshv6a","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]