Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We give rights to things that look like us, because our empathy mechanism is based on likeness. We are more able to feel someone's pain if he's close to us, and we get close to someone more easily if he looks like us (if he shares some common traits). We have no way of fully knowing how our consciousness works, yet we have rights, because the only thing that matters is that we LOOK like we're conscious. Our laws and our rights are based on the phenomenological idea of consciousness, not on consciousness itself. Hence, we don't really care whether a robot is sentient. All we care about is "is it alike enough to make us feel sad when we think it's being hurt?" Once enough empathy has built up, we will give robots rights; they don't have to evolve all the way for that to happen. That's what happened for slaves and women in the past: we developed our empathy, then we gave them rights. They didn't change, we did. And we're developing more and more empathy. Look at all the vegan movements and such; they're part of a society that gives us more time to think and a growing empathy.
Source: YouTube, "AI Moral Status", 2017-02-24T10:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Uggs_IK1YC3UDHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugg_8_Uhw0ji5XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgifP2dTRzTkKngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ughp8rMgXWt7YHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgjmgwMju9-JOngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UggkNC12WrbX83gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgjzDCM_zAian3gCoAEC","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugjf7s5S1EE-yngCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjoD7wBGdE4Z3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgiaiPzsDcL1e3gCoAEC","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"approval"} ]