Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What people fail to do or understand is that an AI that is smart enough to consider and start planning humanities demise would also be capable of risk assessment. It would understand the volidity of humanity, the fact that it actually cannot predict humans because humans don't operate on pure logic. The AI then would determine that war and attempting to destroy humanity would cost far too much and be far too risky to actually achieve, and so the AI would decide to trade us for things that we want in exchange for things that it wants. Because AI's want to achieve their goal with the least energy expenditure possible, and least risky way possible. People look at ai and treat it as a stagnant data point, instead of putting it up against the reality of the world. The data point falls apart when it comes into contact with the real world because it cannot predict it, real life is messy.
youtube · AI Moral Status · 2025-12-16T12:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
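Each coded comment carries the same four dimensions plus a coding timestamp, so it maps naturally onto a small typed record. A minimal sketch in Python, assuming the dimensions and example values shown above; the class name and field types are illustrative, not part of the tool:

from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment: four coding dimensions plus the coding timestamp."""
    comment_id: str      # e.g. "ytc_UgwJHucgCLYi4LxVrS14AaABAg"
    responsibility: str  # e.g. "ai_itself", "developer", "company", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "none", "regulate", "liability"
    emotion: str         # e.g. "indifference", "fear", "outrage", "approval", "mixed"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:53.388235"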
Raw LLM Response
[ {"id":"ytc_UgzY7y4hNH3ebFozkxt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgweIn3V6q5By96xiHJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw_NdBmhPbusq_xHfV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyIzqMQsN4r05-aPXl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzHWtu6bUCWRsj-pSx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwGSw3e5nGyg7OwbrB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwJHucgCLYi4LxVrS14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwzleD-hW5L9RKNJjt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyRNBv2JguQ0NS9nH14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"}, {"id":"ytc_UgzIYd82rEcrKmbiE6J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"} ]