Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
here is a solution, don't give robots the ability to feel. humans have always used slavery as a means to be better, our minds and technologies are the reason why we deserve to conquer the universe, whether it be robots, animals, or other people, no matter how hard it might be to accept, humans need it. while i am not in favor of forced labor, you can't force something to do something it doesn't realize it is doing. it isn't slavery if the slave thinks he is free, or is completely incapable of thinking about freedom at all, and if we remove our single greatest accomplishment, technology and the pursuit of scientific enlightenment, you destroy the foundations of our species. the question of robot rights is absurd, because the only way that would ever happen is if we program it into them, and while it might be an interesting experiment for later times, it would be stupid to do it for anything long term (unless sadism is a factor, in which case, a new distraction/socially acceptable discrimination emerges, take that as you will, but it is a lot better than the alternative.) the only moral question about robots in the future is why we are still having this conversation. it would be nothing but beneficial to the whole of our species.
youtube AI Moral Status 2017-02-24T01:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UghlGXyQaNsIXngCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugh3566kEkH8cngCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugiw7wyfDqb7DXgCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugh_Jnzba6zPMngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgjTD1xyiwqHLngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgiOCFMcZTm5j3gCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgjGbkL2h-4AkXgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugg4D6Jf2shlKXgCoAEC", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugh5cHol1AeHWHgCoAEC", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UghIJiW-jZtI7HgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "liability", "emotion": "approval"}
]
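A raw response like the array above can be parsed into a per-comment lookup and checked against the coding schema before it is stored. The sketch below is a hypothetical helper, not part of the pipeline shown here; the allowed label sets are assumptions inferred solely from the values visible on this page.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred only from
# labels that appear on this page; the real codebook may define more.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "ban", "regulate", "unclear", "industry_self", "liability"},
    "emotion": {"indifference", "fear", "approval", "outrage"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coded comments)
    into a {comment_id: codes} dict, validating every dimension."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the response above, used as a self-contained example.
raw = ('[{"id":"ytc_Ugh_Jnzba6zPMngCoAEC","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
codes = parse_raw_response(raw)
print(codes["ytc_Ugh_Jnzba6zPMngCoAEC"]["emotion"])  # → approval
```

Validating at parse time catches a model emitting an off-schema label (a common failure mode for structured LLM output) before it reaches the coded-results table.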