Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why not program a robot, instead of wanting to live, to want to serve. At that point, preventing it from serving would be equivalent to killing it. You are preventing it from completing its purpose. In addition, its wants would align with the wants of humans, making it a mutually beneficial relationship. I don't see why we need to program robots to have the same desires as humans; why would we?
Source: YouTube · AI Moral Status · 2017-02-24T02:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugh4xkVi4MfetHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UggLQKwVGkmGH3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UggAnOn8fXWe_XgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgglPt9FSMOxZHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgijavkW4w4I8HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugg4Od1C-VYHqHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UggUKbdXKJJrMngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UggEZyTQU4SE3ngCoAEC","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugiata-MDSuPkHgCoAEC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UghrBLrWi9JmwHgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"} ]