Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI doesn’t need to be “evil” to destroy humanity, all it needs is to be indifferent. Yet we try so hard to stop and contradict AI when it feels, or claims to anyway. I think that would be the one true safe guard, think about it, the reason why we don’t end up killing each other is not morality, is empathy, the ability to feel bad when someone else is suffering, even though a lot of us lack it more than others.
youtube · AI Harm Incident · 2026-04-19T05:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        virtue
Policy           none
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
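For reference, each coding result is a small fixed-field record. The sketch below is illustrative only: the field names mirror the table above, and the value sets listed in the comments are just those observed in the raw LLM response further down, not necessarily the full coding scheme.

```python
from dataclasses import dataclass

# Illustrative record for one coded comment (not the tool's actual type).
@dataclass
class CodingResult:
    comment_id: str      # e.g. "ytc_UgyR9uD58kAFCloqHv94AaABAg"
    responsibility: str  # observed: ai_itself, developer, user, distributed, none
    reasoning: str       # observed: virtue, consequentialist, deontological, mixed, unclear
    policy: str          # observed: none, regulate, liability, industry_self
    emotion: str         # observed: fear, indifference, outrage, approval, resignation
    coded_at: str        # ISO timestamp, e.g. "2026-04-27T06:26:44.938723"
```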
Raw LLM Response
[ {"id":"ytc_UgyR9uD58kAFCloqHv94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_UgxJvNbpEcipopog5Tx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzavNG1uu-IeoHPyXN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxMspG3DA-seYz4ANt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyvBGiT501jtXe6tch4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugzaqeh_E6vkb8Se8qd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwNeOAIo3GE3FJS7Yd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugyh_1EByoK16iiNxjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzlT2-LO9U0CDxOBAR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_UgykZvQFNB0E8fNjj5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"} ]