Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The strawman of course being the likening of suffering inflicted on people and animals to an artifice of humanity. There are video games where characters suffer fear, pain, and/or death, lust, desire, envy, etc. They were programmed to experience these conditions; their existence is perceived through the virtual world, but it is nonetheless as real to them as this toaster scenario. Turning off a video game is therefore genocide. Then there is Westworld, which further depicts a bias against those that would "exploit" machines in much the way one might exploit a car at a demolition derby. If said car was a smart car and feared being damaged, is it then morally incorrect to use said car? Would entering said car without its prior consent be violating it? Set aside being sentimental; see the artifice for what it is. I say it's fine to indulge the idea where it serves humanity, but where it serves AI or artificial life, we open ourselves to a needless moral conflict with NO positive outcome. my 2 friggin' cents
youtube AI Moral Status 2019-12-02T19:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwHcfPn3nHXUcxrmBt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwcJZqaSv0UKINrT754AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzBqSrsoUNe3MPxdE54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz-B0YzdGDbBsGUTpJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyTocf-KcoBoYG2NHB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyi8KStfY-Nierx6fx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"mixed"},
  {"id":"ytc_UgzJ_7ZSJxRPZ3fNNdx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwcUCxIfsflDUCIgJN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyXJSNsnAH2j8H6PD14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxKIoSO5YaVn5FW6qp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
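A raw response like the one above can be parsed and sanity-checked before the per-comment codes are stored. Below is a minimal Python sketch; the allowed label sets per dimension are inferred only from the values visible in this response (an assumption: the full coding scheme may define more labels), and the `parse_codes` helper name is illustrative, not part of any existing tool.

```python
import json

# Allowed values per coding dimension, inferred from the response above.
# Assumption: the actual coding scheme may contain additional labels.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"mixed", "approval", "indifference", "fear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and validate each coded comment."""
    codes = json.loads(raw)
    for entry in codes:
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{entry.get('id')}: unexpected {dim}={value!r}")
    return codes

# Example with a single (hypothetical) comment id:
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]'
print(parse_codes(raw)[0]["emotion"])  # mixed
```

Validating against a fixed label set catches the common failure mode where the model invents an off-scheme label, so malformed codes fail loudly instead of silently entering the dataset.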