Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"You're a speciescist!" Really? People craziness has gone that far? They need so desperately to look "good persons" that they are willing to let their lives at the mercy of an yet unknown and unpredictable intelligence? Because "they love all sentient beings". Look, I respect the life of the Lions and everything. But I don't agree to let them loose amongst us, eating whoever cross their path. I want the lions there, in their reserves, where OUR species put them. That's how it has to be. Yes, our lives are more important than those of other species, biological or not. Let's see how this "good person" reacts if the "digital god" he created decides that the life of, let's say, his offspring, needs to be ceased because some other machine or non-biological sentient being needs this to continue existing. There's even a video already on youtube, showing how ChatGPT solves the "trolley problem". In all scenarios involving an AI against human beings, ChatGPT decides to save the AI at the expense of humans. That's being "speciescist", against us! To be fair, I didn't tested ChatGPT myself to know if the video was a joke or if it was truly an interaction with that AI, though the captures show the ChatGPT screen. But even if it is a joke, today, it shows the problem we are facing. Perhaps we want to be "good persons" and respect every sentient beings. But what if the other species doesn't care to feel good and decides against our freedom or even our existence? We can do good and respect others without losing the perspective that maybe the others think different, act different and, therefore, we must be cautious.
youtube · AI Governance · 2023-04-18T19:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyVGMeb_6J0Ysde4sh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyNxb82dIHnlMd5Hkd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyeBQ0JyHTwM7hrt-F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxQUAVvEtPOPUkU8xh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgyTGLty2oggVuLenvt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxMxt5wUsGRrj1pHLB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwpHqp-dflXFg4hrqp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxXionQ-fWM5Wr7_h54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugxy2d8181ZvBsW0xNh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxyYiuRdAEOTSgqQ_Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"} ]