Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is highly unlikely that a robot that just does a task [such as hard labor, or some specialized task] will EVER need something like "the ability to feel pain". Humans [and other animals] developed this trait due to natural selection, since it was the most likely trait to survive. Robots don't need this "to survive" thing programmed into them because if they were to be broken or destroyed, all you would do is replace them. This means that even IF robots one day are capable of emotions, this still means the VAST majority of robots on the planet would not have them. Any robot deserving of rights would not be a good robot to force to do labor, so the robots doing labor will not have the capacity for suffering so we don't have to worry about it. As for robots that DO have that ability, then yes, I agree that they should have rights.
Source: YouTube · AI Moral Status · 2017-02-23T14:5…
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | consequentialist
Policy         | regulate
Emotion        | indifference
Coded at       | 2026-04-27T06:26:44.938723
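A coding result like the one above can be thought of as a small typed record: four categorical dimensions plus a timestamp. The sketch below is a minimal illustration of that shape in Python; the class name, field names, and the assumption that this comment corresponds to id ytc_Ughgv7iY07dgTHgCoAEC (the matching entry in the raw response below) are mine, not the app's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment (illustrative shape, not the pipeline's real schema)."""
    comment_id: str
    responsibility: str  # e.g. "developer", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue"
    policy: str          # e.g. "regulate", "ban", "liability", "none"
    emotion: str         # e.g. "indifference", "fear", "outrage"
    coded_at: datetime

# The record shown in the table above, assuming it maps to the last entry
# of the raw batch response.
example = CodingResult(
    comment_id="ytc_Ughgv7iY07dgTHgCoAEC",
    responsibility="developer",
    reasoning="consequentialist",
    policy="regulate",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```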
Raw LLM Response
[ {"id":"ytc_UggxBv6Bh68AOXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugi6g4FkM0SElXgCoAEC","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugh1j66C9k7XO3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugik2MV5JbWHtXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UghtsfO07MMnfHgCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgiTS2v4li_yF3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugj5BBXR8r_1EXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ughby7Ihz3l8n3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgggfKyYxs8w4HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ughgv7iY07dgTHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"} ]