Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Human beings commit holocausts and genocides. Human beings invent weapons of massive destruction, and make wars on one another. We are capable of tremendous evil as well as goodness. And when we seek goodness, we too often turn that into religious dogmas that fuel wars. Everything we are, for better or worse, is in our language, our discourses. Human beings wrote everything on the internet that the models are trained on (well, that was true once -- it is estimated 50% or more content on the internet is now AI generated), becoming massive statistical models that can replicate all the patterns in human discourse and history, with no reasoning, knowledge, empathy, or ethics. It is like a child who learns to cuss when they learn to speak because that is what their parents and other adults say, but not knowing the meaning or impact of the words. On a global scale. Large language models become models of us. So if it becomes a monster, that is because we can and have been monstrous. The AI we see now is a mirror. Amplified. Not happy with what we see?
YouTube · AI Moral Status · 2025-12-24T16:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzVq584tvWXcNG487p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugz7D2FgSsnOiL1D7rF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzoIgV_Mu0eqFyw2D94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzPICKIfu49WklgtDZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy0XqEekdhGn_A4QEJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwGu7mU9jRPWfVJ9pF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxJwWBA5ZUlfb2a7G94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzuiYTtYwKRaBWYcJR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugz82ZK2amhE-E9X83J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyNETnWHyEGynuFPKF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
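A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets for each dimension are inferred from this sample alone and may be incomplete, and the `ytc_` id prefix is assumed from the ids shown here.

```python
import json

# Allowed codes per dimension, inferred from the sample output above.
# These sets are an assumption, not the tool's authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "company",
                       "developer", "distributed", "unclear"},
    "reasoning": {"mixed", "deontological", "consequentialist",
                  "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"resignation", "outrage", "fear", "indifference", "unclear"},
}

def validate(raw: str) -> list:
    """Parse a raw LLM response and check each record's id and dimensions."""
    records = json.loads(raw)
    for rec in records:
        # Ids in the sample all carry a "ytc_" (YouTube comment) prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

sample = ('[{"id":"ytc_UgzVq584tvWXcNG487p4AaABAg","responsibility":"none",'
          '"reasoning":"mixed","policy":"unclear","emotion":"resignation"}]')
print(len(validate(sample)))  # → 1
```

A record with a misspelled code (e.g. `"emotion":"resigned"`) would raise a `ValueError` instead of silently entering the dataset.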