Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
But what if its programming tells it to program other robots in the best way it can? Most code for machines today are actually written by other programs which are written by programmers. If the program tells it to program the machine for the machine to work for as long as it can, what the program sees it necessary to program pain and pleasure so that the robot would perform for as long as it could and sense its environment to steer itself away from harm? Then it would experience pain and pleasure. Would it be concious? It does what its programming tells it to. What if we're not the ones programming it?
YouTube · AI Moral Status · 2017-02-23T21:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgiNKzzbbtFetXgCoAEC.8PL4_BkYKhH8PL7-06PJyx","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgiNKzzbbtFetXgCoAEC.8PL4_BkYKhH8PLC4xIffA-","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ughcmty2iMMsFHgCoAEC.8PL3q3krKEk8PL7m7PJYg9","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UggRPiq5dwY9P3gCoAEC.8PL2mhsalPI8PLABX7lpAr","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgiOaXexrKY_SXgCoAEC.8PL12S-v8fO8PL5FFgDJRd","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugjrf2Y85YKyVngCoAEC.8PL-Q9tvEeN8PL4sqa1Ty2","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugjrf2Y85YKyVngCoAEC.8PL-Q9tvEeN8PL7cIsRhFh","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugjrf2Y85YKyVngCoAEC.8PL-Q9tvEeN8PL92CZfAg3","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgjIvUEfE5r063gCoAEC.8PKz6lrvdh48PLB9sOe8Te","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UggWa3AcKd7cHXgCoAEC.8PKyWZiGd_R8PL60F186iM","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
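A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, not the project's actual pipeline: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, but the allowed value sets are assumptions inferred only from the values visible here, not an exhaustive codebook.

```python
import json

# One coding object, excerpted verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytr_Ughcmty2iMMsFHgCoAEC.8PL3q3krKEk8PL7m7PJYg9",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "unclear", "emotion": "fear"}
]'''

# Allowed values per dimension -- inferred from the codings shown above;
# a real codebook would likely define more categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "indifference", "approval"},
}

def validate(coding: dict) -> list:
    """Return a list of problems with one coding dict (empty if clean)."""
    problems = []
    if "id" not in coding:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# Parse the array and report any codings that fall outside the value sets.
for coding in json.loads(raw):
    issues = validate(coding)
    print(coding.get("id"), "OK" if not issues else issues)
```

A check like this catches the common failure mode of LLM-based coding, where the model invents a label outside the codebook or drops a required field, before bad rows reach the results table.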