Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like what people expect AI to be trained on is our lived experience as humans. We have all been the victim of magical, thinking that LLM‘s would be spoonfed “alternative text“ to fully explain photographs, and they would be taught to learn human emotions and to understand us from the human perspective. But in reality, unless someone marries AI to human brains, and not just any dipshit influencer brains I mean the best brains voted on by the 8 billion people of Earth AI will never know the human experience because it cannot. No one wants to put two and two together on this, but when Curtis Yarvin developed Dark Enlightenment, he and his billionaire buddies said the quiet part out loud, they don’t believe in humanity. There is no more to it. They want to destroy every human on earth except for themselves, and they don’t give a fvck about the planet, so obviously they don’t even give a sh,t about themselves. They can build as many underground bunkers as they want, but they haven’t invented the technology that they would need to survive for more than maybe 20 years and that’s a hard maybe especially if nuclear war happens this year. Just saying. What is any of this even for? Not gonna lie though. If the world does end in nuclear war, I guess there won’t be a second coming. 😂
YouTube · AI Moral Status · 2026-03-28T05:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyHqPnr1Ht-Pn7AXxl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyCfwgdOu-QbVz0HyN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwErXg5Y4J0qGJlLC54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwFQUwvMTtENF5tihF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugwd99X7h1M6QXTiocx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgySvdEavWLJp0zf8q94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxPlo-W1TZx4Uj9BDp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxBRbtJd1pvtrnhcyB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwLnnUC_1P1nHgbDWN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxoJ3gfvuMqFgU9kIx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]