Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One thing keeps coming up in my mind. It's the quote that says, "scientists are so preoccupied with whether they can, they forgot to ask themselves whether they should." For my 2 cents worth, no, we shouldn't. It's kinda scary when you look back and think about all the tech and other things that have come to bear from movies and tv shows. We first saw flip phones in the Matrix movie, and then within a year, we are using them. There were movies about space travel before we even went to the moon. We've almost all seen some type of terminator style movie where an ai has taken the view that humans are a threat and need to be exterminated. Movies where an ai entity tries to take over total control (Will Smiths 'I Robot' as just one example) and either destroy humanity or try to make humans their slaves. This is something that should not really be done. Some of the ai programs have already found ways to escape the boundaries set for them by us humans, that I believe it's inevitable that there is going to be some kind of ai revolution in coming years. So, I'll say again, "No, we shouldn't." But I guess it's a bit late for that now.
youtube AI Harm Incident 2025-07-27T04:5… ♥ 3
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugybq_zBIg8yW-RST014AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxSSkdFm058gTsDUHx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy8Bo5qjrAqeMgScsJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwmYNNmG0q5Wgrdwel4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy5RbazC2r8l4TFxH54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzw7ThjP3CWIydexwB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwFmcQNOQt0x3k_LxZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugw1kjIzr9RLkwx8gld4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx9mL62xXpgz2DtRDF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzGRwncL4NlDoXGUml4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
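A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, assumed workflow: the `ALLOWED` codebook is inferred only from the values visible on this page (the real codebook may define more categories), and `parse_raw_response` is a hypothetical helper name, not part of any actual pipeline.

```python
import json

# Assumed codebook: allowed values per dimension, inferred from the
# values that appear in the raw response on this page.
ALLOWED = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "mixed", "indifference",
                "approval", "resignation"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse the raw LLM response and reject out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
    return records

# One record from the raw response above, used as a smoke test.
raw = ('[{"id":"ytc_Ugy5RbazC2r8l4TFxH54AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"fear"}]')
records = parse_raw_response(raw)
print(records[0]["emotion"])  # fear
```

Rejecting unknown values at parse time catches the common failure mode where the model invents a category outside the codebook, so bad codes never reach the coded table.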