Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Are you contradicting yourself? Some humans are making AGI/ASI that will quite literally infringe upon all other people and all societies, and you're giving that ultra-infringement (and the likely ultra-violence that will come with it) a free pass while claiming that what really 'crosses the line' is when the victims say or do something that infringes upon the people who are conducting the large-scale infringement of the victims. Your stance comes off as, "How dare you infringe on those who are infringing on you; that's not allowed!" If some reckless people are having fun trying to shoot the hat off your head without killing you (and without asking your permission), and are thereby subjecting you to high risk of violent injury/death, then you absolutely have every right to use violence to get them to stop. And you're extra justified if those people are endangering not just you, but your kids, neighbors, and every other living creature on Earth. Your statement of "Violence crosses the line, period" only applies if nobody is under threat of violence. If people are under legit threat of violence (and especially existential violence), then it's bluntly false to claim that they're not allowed to use violence in self-defense. This is not a uniquely human conclusion: it is ubiquitous throughout the animal kingdom. Every animal feels it has a right to violent self-defense when its life (or the lives of its family) is being violently threatened. Feel free to aggressively stroll up to any mama bear with young cubs and check this for yourself. If it's true that the AI companies are creating things that cause great risk of violence toward the common person and their loved ones, then the common person is absolutely justified in using violence for self-defense. Just as cows and chickens would not be morally at fault if they were to storm the slaughterhouses and destroy the equipment.
youtube AI Governance 2025-05-21T17:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_Ugzgm57jBiilQTuMkmh4AaABAg.AIOPyjPGyoVAIOYR606scY", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugzgm57jBiilQTuMkmh4AaABAg.AIOPyjPGyoVAIT3O5xdGNS", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugz_aYY_K34HF9TW2fZ4AaABAg.AIONHwx5AaAAITuh6dONpV", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_Ugy1c50OJPFZdhhWQPN4AaABAg.AIOJiSQ-cjuAIOd_dTaSAU", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_Ugy1c50OJPFZdhhWQPN4AaABAg.AIOJiSQ-cjuAIPgnFYT7Pb", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytr_UgxYinNoOGtKsqz9sQx4AaABAg.AIO7TMSpr0tAIOfXttp3Pi", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugw5wfFhZmWiGl45Gsh4AaABAg.AIO-dYN-3ZYAKW7q3p90Io", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwoGSxcV6Sb5gGI15V4AaABAg.AINpUmdZ8g1AIO-8VPHV1K", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgwoGSxcV6Sb5gGI15V4AaABAg.AINpUmdZ8g1AIO8ihVGSZd", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgwoGSxcV6Sb5gGI15V4AaABAg.AINpUmdZ8g1AIOAQipbyzu", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
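The raw model response is a JSON array with one record per coded comment, each carrying an `id` and the four coding dimensions. A minimal sketch of how such a batch could be parsed and its labels checked before loading — the allowed category sets below are inferred only from the values observed in this batch, an assumption since the full codebook may permit additional labels:

```python
import json

# One record from the batch above, abridged to a single entry for illustration.
raw = '''[
  {"id": "ytr_Ugy1c50OJPFZdhhWQPN4AaABAg.AIOJiSQ-cjuAIOd_dTaSAU",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]'''

# Category sets inferred from the labels seen in this output (assumption:
# the actual codebook may define more values per dimension).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "mixed", "outrage"},
}

def validate(records):
    """Keep only records whose labels all fall within the allowed sets."""
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

records = validate(json.loads(raw))
print(records[0]["responsibility"])  # prints "developer"
```

Filtering rather than raising on an unknown label lets a downstream step log malformed records (e.g. a hallucinated category) without aborting the whole batch.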