Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am in the camp of "it's inevitable." The competitive edge provided by AI to any large company or nation that exploits AI is insurmountable even in its current form. We are not united as a species. We are constantly trying to 1-up each other. And as long as that is true, you can't be the first one to give up a clear advantage for the good of everyone unless there is a 100% chance that advantage will destroy you, because there IS a 100% chance that you will fall behind if you don't use that advantage. It's like nuclear bombs. Nuclear disarmament will never happen even though it would be in everyone's best interests. If you give up nuclear weapons and someone else keeps theirs, that immediately gives them the ability to conquer the world on a whim because the advantage provided by nukes is so great. AI offers the same thing, but using it to "win" doesn't even require going to war. And unlike nuclear bombs, there doesn't seem to be a point beyond which further development stops being practical. So the development of AI will continue. When its risks become more apparent, some parties may back out, but they will be made irrelevant by the parties that continue to push development. Even a global ban on AI by every major nation wouldn't stop it from being developed in rogue nations and in the skunk works of companies eager to keep their edge. If anything, that would lead to a lack of oversight, and more dangerous avenues of development.
youtube · AI Moral Status · 2025-11-13T04:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugy--G3l4v4wdBDrdeN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxjVWGx-eqsWy1AwCd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxfyJzoP4g-3b1ApY54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwj_ncNPgfCZTiLHWd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgyzqP9uCO7iUhYttz14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugzqmx9erBuVnwsMJcp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx9TEUQxojq26nNepd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgymRBJgMGLu2ZrDc_h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwuG77G5GwsgIom9_t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxnTSM1SiX5jHO0oOl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"mixed"} ]