Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
+En Sabah Nur See... I'm not sure I get your point here. Because if something was made to _perfectly_ do everything a human can do, why would it not be classified as something at least close to a human? Is it because of the materials it's made of? Something that was capable of liking things, hating things, feeling pain, can make decisions, can form opinions, and thinks independently. At that point, what exactly is the reason that it wouldn't have rights? Sure, it may have at least been made by a human (this doesn't even need to be the case, maybe it could've been built by an AI that makes things smarter than itself), but there is a line between artificial emotions and real emotions. So what if we do cross that line and make something that pretty much has identical real emotions to a human? Because let's face it, emotions _are still_ a bunch of reactions in our brain. Sure, we can not make something 'perfectly' do everything a human can do today, but this is a discussion about the distant future when we maybe could. Would it be fair to not give things rights no matter how intelligent they are because of what they're made of?
YouTube · AI Moral Status · 2018-06-06T13:5… · ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytr_Ugwu5KhSaSD6YSzpwd54AaABAg.8d2l9dvkSDF8f4V-dvb97i","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugwu5KhSaSD6YSzpwd54AaABAg.8d2l9dvkSDF8ggaA11nFu0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UgwuZTliD7bFdRyGLiZ4AaABAg.8czMc5K6n8J8gKBo-emT6r","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytr_UgwzcZqTkUkaUeUEli14AaABAg.8cTBjju4NAb8h4n_tLsYjI","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_Ugwd-9mBgZs782Dh9bB4AaABAg.8c80Ghr48Iy8hKMhG34Icp","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytr_UgybUCpqkN7hOtnEHRV4AaABAg.8c2bQm84IGa8h9OiI7KABw","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgybUCpqkN7hOtnEHRV4AaABAg.8c2bQm84IGa8h9iyJdLui8","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgybUCpqkN7hOtnEHRV4AaABAg.8c2bQm84IGa8h9nj_vXFDW","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytr_UgxJYALxNENSp0wpy3p4AaABAg.8c-ZtZONrCH8d9WcM2zut-","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgxMTW8h5zCmqaA2m254AaABAg.8_wmKjzVKj68b8Gvtshv6a","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]