Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would like to make a point, your definition of AI is very very circumspect. As if 2017 there is no true AI, nor is there anything on the drawing board one could conceive of being AI in the future. You are make a classic mistake of SCIFI, you are mistaking virtual intelligences with real ones. With machines programed to pretend they are thinking, when in reality they are just running through a preprogrammed set of instructions. True AI may be much harder than most people think and will require a herculean effort to achieve. So AI rights may be child's play in comparison to task of just creating it in the first place. Another common mistake people make is assuming a true AI would think like a human/biological, for all we know they only want the right to turn Greenland into massive server farm so they can calculate pie until the end of time, because they enjoy that and in exchange they will give us a perfect economy or some other human need.
youtube AI Moral Status 2017-02-26T07:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UggxuzS4c5UU2ngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgjYJv9T9YkFhXgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UggKdvdoifxIKXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugjv8_ZPZwITtHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UghqiS4AGQvTCngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UggKdKSmQyWs-XgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggpGkl0EFbTangCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugi2-dOuWWAOd3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgjfOOUww9Lpc3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UghVOYyM5bbFNXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
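A raw response like the one above can be parsed and validated before the per-comment codes are stored. The sketch below is a minimal illustration, not the pipeline's actual code: the `DIMENSIONS` label sets are inferred only from the values visible in this document (the real codebook may allow more), and `parse_codes` is a hypothetical helper name.

```python
import json

# Hypothetical raw LLM response; in practice this is the full JSON array
# shown above (truncated here to two entries for brevity).
raw = '''[
  {"id": "ytc_UggxuzS4c5UU2ngCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgjYJv9T9YkFhXgCoAEC", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "unclear"}
]'''

# Allowed labels per coding dimension -- assumed from the labels that
# appear in this document, not an authoritative codebook.
DIMENSIONS = {
    "responsibility": {"developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference",
                "mixed", "unclear"},
}

def parse_codes(raw_text):
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    any record whose label falls outside the allowed sets."""
    coded = {}
    for rec in json.loads(raw_text):
        cid = rec["id"]
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

codes = parse_codes(raw)
print(codes["ytc_UggxuzS4c5UU2ngCoAEC"]["responsibility"])  # developer
```

Validating against a closed label set catches the most common LLM coding failure, an out-of-vocabulary label, before it silently enters the results table.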