Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hey there, first of all. You guys greats videos, this one I think is not so good. I'm doing a Ph.D. in IA. Such thing like a machine in power will never occur. Only occur if we want that to happen, I mean, in the same way, they don't need to programming feelings, they don't need to have "THE POWER".In all Science and Theories we have something called "Axioms", and those axioms give you the power of think (It never change, It cant change ( theory of logic ), and they will be programming with that purpose). Eg. We never put pain in the programming of a machine because doesn't need it. Can someone program a computer that can be evil? ( Do harm to people? Yes, but only thinking about that purpose. ) In this case is like everything bad that bad people have created ( guns, bombs, etc. ) Enterprise, The state, other organizations have the processing power to make ( another IA robot with more intelligence that the VIRUS IA? Yes, and everything will be ok.) The ads of that some bad IA conquer the world are lower than the human race disappears the next year. Sorry my writing Im in a rush
Source: youtube · AI Moral Status · 2017-02-24T08:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgiE2bWZYr8kongCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugh9P4_eKrwalngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgjskTMISraJQHgCoAEC", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugjk0Y6xD83-EXgCoAEC", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UggoA2NeSFr0UngCoAEC", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ughv1SUQ9LodhHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgiCljP_01nGF3gCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UggKSrHxgQCsDngCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugh9iU4V9rY4tngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgiUX4IZv6JGangCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
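A raw batch response like the one above can be parsed and checked against the coding scheme before the values are stored. The sketch below is a minimal example, assuming the allowed labels are exactly those visible in this dump (the full codebook may define more categories), and that every record carries a `ytc_`-prefixed comment id:

```python
import json

# Allowed labels per dimension, inferred from the values seen in this dump
# (assumption: the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose labels
    fall inside the coding scheme and whose id looks like a comment id."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        labels_ok = all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
        if labels_ok and rec.get("id", "").startswith("ytc_"):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ughv1SUQ9LodhHgCoAEC","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}]')
print(len(validate_codings(raw)))  # 1 record passes validation
```

Validating before storage catches the common failure modes of LLM coders, such as invented labels or dropped ids, without touching the raw response itself.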