Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Asimov Laws: aka 3 laws of robotics 1 A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2 A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. If a robot producer cannot absolutely guarantee that these laws are 100% built in the very core of his robots, the law enforcement institutions should prohibit him creating robots. This is the interest of the entire human race. I understand that the development of this robot is in a premature phase, but what we heard here is very scaring, and it gives a very bad reputation for the producer. Unfortunately human nature is a factor here, so we may prepare to see robots actually made to destroy humans.
YouTube · AI Moral Status · 2016-05-03T16:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgiWMHgExCirhXgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugj49_6vw7yjgngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UghCAFOXmiIjeHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgjJxkMDZZq-_XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugiaitc4_ZrS33gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugheq5RGy67-sXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UggcfE3mEG2EbXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UghhIAZwlw_3AXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugg6JbEau3pZD3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgiwagGOMnP-vXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
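A minimal sketch of how a raw response like the one above can be parsed and checked before the per-comment coding result is displayed. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the output shown; the `validate` helper is illustrative, not part of the actual pipeline, and the `raw` string is truncated to two of the ten records for brevity.

```python
import json

# Truncated excerpt of the raw LLM response shown above (2 of 10 records).
raw = '''[
  {"id":"ytc_UghCAFOXmiIjeHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgiWMHgExCirhXgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]'''

# Every record must carry the comment id plus the four coding dimensions.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate(records):
    """Return the ids of any records missing a required field."""
    return [r.get("id", "<missing>") for r in records
            if not REQUIRED <= r.keys()]

records = json.loads(raw)
bad = validate(records)          # [] when every record is complete

# Look up one record by its comment id to render a "Coding Result" row.
row = next(r for r in records if r["id"] == "ytc_UghCAFOXmiIjeHgCoAEC")
print(bad, row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```

In this sketch a record that dropped a dimension would surface in `bad` rather than silently producing an empty table cell, which is the main failure mode when an LLM returns partial JSON.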