Raw LLM Responses

Inspect the exact model output behind the coding assigned to any individual comment.

Comment
Unlike natural selection and evolution (a process which occurs completely independently of earthly "creators"), AI is a completely artificial, controlled evolution of human-created technology. We are consciously and actively bringing this into the world, and we will potentially one day consciously and actively bring sentient AI into the world. Therefore, I suppose we are especially obligated to ensure they are treated "humanely" and given proper rights. That being said, we should absolutely put humanity (and I suppose other organic life) before this created "life". After all, AI is intended to be made for our benefit. Not to mention, we created it and therefore have a responsibility to ensure it poses no threat to us. Where do the ethics even lie in this situation? Who's more important? Because AI will be smarter than us, and likely more powerful in many ways. It would be easy to argue that their rights should supersede our own. Where does that leave us? We need to watch out for ourselves and all organic, natural life on this planet above any robot we create. We need to act for humanity's sake, because robots may not.
youtube · AI Moral Status · 2017-02-23T14:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UggxBv6Bh68AOXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugi6g4FkM0SElXgCoAEC","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugh1j66C9k7XO3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugik2MV5JbWHtXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UghtsfO07MMnfHgCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgiTS2v4li_yF3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugj5BBXR8r_1EXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ughby7Ihz3l8n3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgggfKyYxs8w4HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ughgv7iY07dgTHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"} ]