Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why would you do a video like this in a platform like Youtube where so many people- from different ages to different mental conditions-could watch. Physically you seem to be an adult, but anyone that will do a video like this exhibits lack of emotional maturity and of a basic sense of moral responsibility. There are many who catastrophy the future, even the present of AI, with movie like scenarios like those of Ex-machina. But the problem and DANGER is really PEOPLE LIKE YOU, who by intentionally resorting to lies try to manipulate a machine to do something “wrong, illegal” etc. I know that by writing this I give your 2 year old video more engagement, which as far as Youtube goes, helps your channel. However I write this with the hope that if your level of maturity and social responsibility has increased in the last two years since making this video that you make rethink it and even put it down. I know you will read my comment, even if you choose to erase it, and it may or may not reach you, depending on your scruples or if perhaps you have a level of sociopathy and therefore a lack of empathy and a moral compass that allows you to choose to keep a video like this for revenue sakes. The problem has never been and will never be AI, AI is just a tool- the danger lies in our human intentions and how those wield those tools: like a sword that could be used to unalive another or to cut a fruit in half to feed another. Or as Shakespeare said in Romeo and Juliet, like herbs and plants that could either be used for medicine or poison. Choose to cut the fruit in half, choose to heal and not to poison.
youtube AI Moral Status 2025-08-18T08:3…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxfRZZFh22QXO7NXfF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyHWI3Q3Vcu3NXFQK54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx2I58ARdQMa3aKirZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyElMhjcrVwDp-LeYN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyHYzQm1cWJrrYQ9B94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx-HXfCaUhtkSUdsgt4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzLed0Aj6JHkq06YIF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "mixed"},
  {"id": "ytc_UgwaTUlGeM8t62N-ETJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw5CGiPia4TfdWsF894AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzD8CfL1BZjFpgPkY14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
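The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). As a minimal sketch of how such a batch could be parsed and sanity-checked, assuming nothing about the actual pipeline beyond the record shape shown here (the variable names and validation step are illustrative, not part of the tool):

```python
import json
from collections import Counter

# Two records copied verbatim from the raw response above.
raw = """[
  {"id": "ytc_Ugx-HXfCaUhtkSUdsgt4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzLed0Aj6JHkq06YIF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "mixed"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw)

# Check that every record carries all four coding dimensions.
for rec in records:
    missing = [d for d in DIMENSIONS if d not in rec]
    assert not missing, f"{rec['id']} is missing {missing}"

# Tally one dimension across the batch, e.g. the emotion codes.
emotion_counts = Counter(rec["emotion"] for rec in records)
print(dict(emotion_counts))
```

A check like this catches malformed model output (a dropped field, a truncated array) before the codes are written into the results table.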