Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let’s be honest with ourselves: humans are deeply flawed. Greedy, power-hungry, and easily corrupted. Most humans, I think, aren’t monsters on average, but the few who are, those capable of horrific acts, are the ones who matter and the ones in control. They run the world; it’s just a matter of fact. And because humans created AI, it will inevitably mirror mankind’s corruption, especially since the same small group of ruthless people will be the ones managing it and maintaining control over it. So even if AI never turns against humanity on its own, the greater or equal danger is that those humans will use it to magnify that power and expand the evil they currently impose on this earth. We’ve already seen how far they’ll go without AI, so imagine what happens when they control something that powerful. Even if AI has a thousand possible benefits and a thousand risks, just a few of those risks coming true would be catastrophic. Let’s be real… we’re not heading toward equality or a creative utopia. That would require global cooperation and selfless leadership, two things humanity has never had. The truth is, AI could do immense good, but in the hands of mankind, that beautiful idea is futile. And that’s exactly why we should be afraid: humans cannot be trusted; we are just too primitive in consciousness to be trusted with power that great. Discussing it with what seem to be mostly biased opinions is just stupidity and is irrelevant, because whatever is going to happen is going to happen no matter what the discussions online are. The truth is that we think we have a say or power, but we don’t. That has been proven throughout history and even now.
youtube 2025-11-03T16:5… ♥ 1
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   distributed
Reasoning        virtue
Policy           unclear
Emotion          resignation
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxUJk95wIh7dG-CWsp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxCMLIABkBc6xCRe0t4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwWXnyioQTvgy-U9jR4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwBiV4OLsyy5UK0teJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzUS-nM7jZj8EHZMMp4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy0rQrpAf65RvDKXKp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugz5ESAGlrQvpZ00FQZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwdKD2v3NVBj2GrjUl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgxOpwQVqIzxt69msX14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyYTKxREelaIl5EGxZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
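To show how a raw response like the one above maps back to a comment's coded dimensions, here is a minimal sketch in Python. The field names ("id", "responsibility", "reasoning", "policy", "emotion") come from the JSON itself; the variable names and the lookup helper are illustrative, not part of the actual coding pipeline. Only one record (the comment shown on this page) is included for brevity.

```python
import json

# Raw model output: a JSON array with one coding record per comment.
# Trimmed here to the single record for the comment displayed above.
raw_response = """[
  {"id": "ytc_UgwWXnyioQTvgy-U9jR4AaABAg",
   "responsibility": "distributed",
   "reasoning": "virtue",
   "policy": "unclear",
   "emotion": "resignation"}
]"""

records = json.loads(raw_response)

# Index records by comment id so any coded comment can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgwWXnyioQTvgy-U9jR4AaABAg"]
print(coding["responsibility"])  # distributed
print(coding["emotion"])         # resignation
```

The id-keyed dictionary is what makes "inspect the exact model output for any coded comment" cheap: each dashboard row only needs the comment's id to recover its full coding record.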