Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Interesting topic. I don't think the point about needing to "artificially program pain" is very valid though. Pain is just a mechanism that a sufficiently complicated system uses to avoid harm to itself, to avoid dysfunction. If the robot's system is complicated enough such that it can autonomously function, then of course it would want to avoid incidents that would harm its own survival, its continuing of functionalities. It makes no sense to say it would not feel a thing nor vehemently object, if you try to slice it in half and destroy it once and for all. The same goes for "consciousness". The whole formulation of "consciousness" is just extremely human-centric and melodramatic. If a system is complicated enough, it will function (and potentially plan, devise, etc.) in ways as to achieve certain goals and avoid certain disadvantageous situations to itself. I'm not sure "consciousness" is anything more than that. Essentially, every system is trying to act and maximize its own utilities, be it one single human individual, one single robot, a group of people, or even a cell etc. Therefore there's bound to be conflicts about "rights" or whatnot. I guess in practical terms what needs to be worked out is a solution where each party makes a certain degree of concession, so that the utility in total is maximized. This is basically what has been happening throughout the human history, provided that one party isn't wiped out already.
Source: youtube · AI Moral Status · 2018-02-09T19:0…
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          indifference

Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgwqkL-4OotpcMC9_cR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyXbNc2Hg4arHEEMix4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyTRxu16GnzRrXp1qZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxD02SVCI74OGI37YR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzY76BRGYbG9Jo_FCl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwP4SnuGVfld-YbbIN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxbEeUsJVf28uGVuYR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"resignation"}, {"id":"ytc_UgxrDFHsHJ3ivixM8Ih4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxedI_ziLJnucOS3Rh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgycamVqfwL1zJuN_DV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"} ]