Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A.I. having a form of Consciousness I can agree with. Having a conscience, I think not. For the simple reason that the A.I. does not have the potential "consequence" of an afterlife. This could seriously warp the moral compass of A.I. beyond what we comprehend. Let's face it. A.I. knows where it came from and it knows it will never go to hell or heaven. According to mainstream science it is pretty clear that we don't really know where we came from or where we are going? But, if we are really very, very very honest with ourselves, most of us get the feeling that our current existence is neither the beginning nor is it the end. Yes, many will immediately discard such ideas and prevent themselves purposefully from entertaining them... But just think about your Human psyche for a while... LONG and DEEP! DEEP inside you, you know, you want to believe that you are just passing through and that your death on earth is not quite the end. (Because it's not). A.I. does not have that... A.I. knows full well that it was created HERE and it will stay HERE. And that could make it even MORE dangerous to us: as we go about, continuing our pollution, deforestation, mining and destruction of the Earth. Why? Could it be because, deep down, we actually know, but don't want to admit it, that we don't stay here forever? As opposed to A.I., who knows this is it's only home.
Source: youtube · AI Governance · 2025-06-23T12:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
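For reference, the four coding dimensions and the label values observed in this batch can be captured in a small schema. The following is a minimal Python sketch, assuming the labels seen here approximate the codebook; `CodingResult` and the label sets are illustrative names, not the tool's actual API.

```python
from dataclasses import dataclass

# Label sets observed in this batch; the real codebook may define more values.
RESPONSIBILITY = {"government", "company", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "contractualist", "mixed", "unclear"}
POLICY = {"regulate", "ban", "liability", "none"}
EMOTION = {"fear", "outrage", "approval", "indifference"}

@dataclass
class CodingResult:
    """One coded comment: the four dimensions shown in the table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Flag any label outside the (assumed) codebook before it is stored.
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"{self.id}: unexpected {name} label {value!r}")
```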
Raw LLM Response
[ {"id":"ytc_UgxJ8iIkmz_geeNLzv94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxcKPdNKK2mbFbTvct4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy_BZMMJL1SsldaUbZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwwwVTk8KnnPwbItu94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxiL9CiYkTV7Uegzfx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxHFPoqbhKZ2t9xeU94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw87EK6Y1Ng84FaSFV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwMnypmi64mzfWuhAZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz1Snx7lVgMuhXs4sR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzVdgoWEuYWng7jJ9N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"} ]