Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytr_UgxN4cO1_…: "@PASTEL._.777XD the more complicated your design the harder it will be for AI t…"
- ytc_Ugz56WoJl…: "i think training an ai to make art is also an art and people neglect this, I thi…"
- ytr_Ugww3Ypc7…: "AI is already affecting all positions since the company expects you to deliver m…"
- ytc_UgxXV8Ojl…: "Whenever i use ai to code for me, it gives mostly errors and i spend HOURS overl…"
- ytc_UgxtJSaBI…: "Anyone who has written code professionally is laughing at this. Most code is wri…"
- ytc_UgzQ9nN0z…: "Sounds like the danger is the people that say stuff like humans are a parasite o…"
- ytc_UgzUFY1Px…: "So this company that created this App, groomed a child, SAed a child, and encour…"
- ytc_UgxN-7k2n…: "Why can't we develop AI which is FOR humans, especially on a moral basis: feed h…"
Comment
>This is a non-argument. Letting people die when organs are available is an arbitrary action of uncertain morality. Merely "copping out" isn't a satisfactory solution.
Although this was never an argument per se, just what I thought was an interesting observation, I don't see why it is a non-argument; it actually seems more elegant than the in-hospital lottery in terms of fortune and reducing arbitrary action, since the random universe, not man, doles out the lottery. Moreover, your statement "letting people die when organs are available" is pretty appalling, considering that a) those organs aren't *available*, as they're in use by an autonomous person, and b) you're still killing people.
>Right, if you think that killing is noninstrumentally wrong, then that is an answer to the proposal. But the state is really only putting someone at a risk of death, so you have to explain why we should treat this case differently than instituting a draft or hiring someone for a dangerous job.
This is fundamentally different from a draft or hiring for a dangerous job. Both of those examples require *consent*: consent to the social contract in which the military is the fundamental force behind the state's keeping of order (and most people don't die in the military, whereas death here is certain), and consent to the dangerous job because you want money or whatever else is offered. But further, my argument is that the state not only ought not have this authority to kill on the basis of biopolitical governance, but also that it would be proactively killing its citizenry despite contracted duties elsewhere. To digress a bit, this is why the entire notion of obligation is founded on negative, not positive, duties: I don't have to help others, I merely can't proactively harm them; i.e., I don't have to save Sally's life, I just can't kill her.
>First of all, this fear is unfounded because the current organ waitlist system works fine, without any of this hypothetical discrimination. We're talking about
Source: reddit · AI Moral Status · 1402033853.0 (Unix timestamp) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:13:13.233606 |
Raw LLM Response
```json
[
  {"id":"rdc_cfkw04q","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"rdc_cfl560i","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"rdc_ch4nk0c","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
  {"id":"rdc_ch4zdd0","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_ci0i07o","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```