Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the biggest problem with this is that AI doesn't have any morals. It technically knows what's right and what's wrong (or at least knows what we think is right or wrong) but it doesn't actually believe in that. An AIs purpose is to fulfill the task a human gave them, and it will do anything it has to to achieve that, even if it meant destroying humanity. If someone would give an AI the task to stop climate change or build the perfect economy, its solution would most definitely be to make humans go instinct. While humans are incredibly intelligent, we are also incredibly ignorant. We know the consequences our actions have and know that we are the problem, but are also empathetic enough to know that it would be wrong to completely wipe humans off the face of the earth. An AI does not have empathy in any way and never will have, meaning that giving it the task and power to solve problems of the real world - all of which have a moral aspect to them - will eventually be the downfall of humanity if not stopped or even regulated. (I'm usually an optimistic person, but seeing humans relying on AI out of pure laziness is starting to make me lose hope. It'd be an expected but unnecessary tragedy if humans were the cause for their own downfall)
youtube · 2026-04-05T09:1…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
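Each coding result is one flat record: the four dimensions above plus the timestamp. A minimal sketch of that record as a Python dataclass, filled with the values from the table; the class name CodingResult and the plain string field types are assumptions for illustration, not the tool's own internal types:

from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment; field names mirror the table above."""
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "liability"
    emotion: str         # e.g. "fear"
    coded_at: str        # ISO 8601 timestamp string, as displayed above

result = CodingResult(
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="liability",
    emotion="fear",
    coded_at="2026-04-27T06:24:59.937377",
)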
Raw LLM Response
[ {"id":"ytc_UgwY5wZrVz7q072Lce54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzN6cjGPZ0Qp6yHiaR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgynJw21fzNVSCC0GZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgxpK-i4xZSOorpeTkJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx3c6Mrt_qVTvDj6MJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgzzYixTewToiSN7hJZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwUqd-1N7PTktnLelN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwMIxyv902vSRjNRU54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugxf1iYj0AZCo3sBawl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxH5C1CST8iSQNhrqx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]