Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we were to make a robot that would be capable of conscious thinking like a human, have wishes, needs (like a need for exploration or need for company - it would make a good constricting device as well, if for example some needs could only be satisfied with the help of a human), then it would be preferable to give it rights on the level of a human, but such AI should be as rare as possible. For most jobs we would use robots for, a sufficient level of programming would be enough; no need for a thinking, feeling machine to, for example, harvest corn in the field. Even human-interaction-heavy jobs like waiter or servant could be made possible with the machine learning of today, once proper databases for voice recognition and understanding commands are ready. Fully conscious AI should only be made for the purpose of high-level decision making, research, or if it would work in conditions that weren't previously explored and which can be surprising (like a spaceship's AI).
YouTube · AI Moral Status · 2020-08-14T08:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugy6dKYY2Pz1ySbziSd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyDO6H8_ZXfOnNhhgJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwq09tVrSm_tiDh4mZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzUKA_GYgqqlc5RyiR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwpDFadINPM-rpu6Nt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz-o_rLhv4UI8ynRsZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx3y5eMySIp2aUQLNN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxIbVfqQ-S41-cOoAR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzcGhOARlbguAAPRAF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyqP4EfvDqt1tq0uyV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
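To inspect the exact model output for one coded comment, the raw response can be parsed as a JSON array and indexed by comment id. A minimal sketch in Python, using one entry from the raw response above (the surrounding pipeline's actual loading code is not shown here, so this is only an illustration):

```python
import json

# Raw LLM response: a JSON array with one object per coded comment.
# Only one entry from the response above is reproduced for brevity.
raw = '''[
  {"id": "ytc_Ugz-o_rLhv4UI8ynRsZ4AaABAg",
   "responsibility": "none",
   "reasoning": "deontological",
   "policy": "regulate",
   "emotion": "approval"}
]'''

codings = json.loads(raw)

# Index by comment id so any single comment's coding is easy to look up.
by_id = {c["id"]: c for c in codings}

record = by_id["ytc_Ugz-o_rLhv4UI8ynRsZ4AaABAg"]
print(record["reasoning"], record["policy"], record["emotion"])
# → deontological regulate approval
```

The dimensions in the Coding Result table (Responsibility, Reasoning, Policy, Emotion) map directly onto the keys of each JSON object, so a mismatch between the table and the raw response is easy to spot.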