Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> This is a non-argument. Letting people die when organs are available is an arbitrary action of uncertain morality. Merely "copping out" isn't a satisfactory solution.

Despite the fact that this was never an argument per se, just what I thought was an interesting observation, I don't see why this is a non-argument; it actually seems more elegant than the inhospital hospital in terms of fortune and reducing arbitrary action, as the random universe, not man, doles out the lottery. Plus, your statement "letting people die when organs are available" is pretty appalling considering a) those organs aren't *available*, as they're in use by an autonomous person, and b) you're still killing people.

> Right, if you think that killing is noninstrumentally wrong, then that is an answer to the proposal. But the state is really only putting someone at a risk of death, so you have to explain why we should treat this case differently than instituting a draft or hiring someone for a dangerous job.

This is fundamentally different from a draft or hiring for a dangerous job. Both those examples require *consent*: consent to the social contract where the military is the fundamental force behind the state's keeping order (and most people don't die in the military, whereas death here is certain), and consent to the dangerous job because you want money or whatever is offered. But even further, my argument is that the state not only ought not have this authority to kill based on biopolitical governance, but also that it would be proactively killing its citizenry despite contracted duties elsewhere. To digress a bit, this is why the entire notion of obligation is founded on negative, not positive, duties: I don't have to help others, merely I can't proactively harm them, i.e., I don't have to save Sally's life, I just can't kill her.

> First of all, this fear is unfounded because the current organ waitlist system works fine, without any of this hypothetical discrimination.

We're talking about
reddit · AI Moral Status · 1402033853.0 · ♥ 1
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:13:13.233606 |
Raw LLM Response
[
  {"id": "rdc_cfkw04q", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_cfl560i", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_ch4nk0c", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "resignation"},
  {"id": "rdc_ch4zdd0", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ci0i07o", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
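The raw response is a JSON array with one object per coded comment, keyed by the same four dimensions shown in the table. A minimal sketch of how such a response could be parsed and tallied (the function name and the two inline sample records are illustrative, not the project's actual tooling):

```python
import json
from collections import Counter

# Two sample records in the same shape as the raw LLM response above
# (illustrative only; real responses carry one object per coded comment).
RAW_RESPONSE = """[
 {"id":"rdc_cfkw04q","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
 {"id":"rdc_ch4nk0c","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"resignation"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally_dimensions(raw: str) -> dict:
    """Parse the JSON array and count coded values per dimension."""
    records = json.loads(raw)
    return {dim: Counter(rec[dim] for rec in records) for dim in DIMENSIONS}

counts = tally_dimensions(RAW_RESPONSE)
print(counts["responsibility"])  # one "unclear", one "government"
```

Keeping the parse step separate from any display logic makes it easy to validate a response (e.g. reject records missing a dimension) before the values reach the coding-result table.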