Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> What about algorithms that make life and death decisions? So here's an example of where we let an AI make life and death decisions with no human override ability: the Maeslantkering storm barrier here in the Netherlands. It will *only* close if the system decides it has to do so. Humans don't get to press the button to close the barrier and there is no override. Why did we decide to do this? Because *humans make mistakes*. A human might get anxious and close the barrier too soon, costing millions or even billions in lost revenue in the port of Rotterdam. A human might also do the reverse, and keep it open too long, resulting in a cost paid in lives. The AI system, on the other hand, does not make these kinds of mistakes. It is constantly producing a forecast model based on numerous data inputs and bases its decisions purely on objective science and fact. The only input we as humans have is telling the system at what percentage of flood risk it should close the barrier, but it is still the AI that determines that risk and makes the final decision. The error rate of a human operator will be orders of magnitude greater than that of the AI; and that is *unacceptable* when we are talking about matters of life and death. Now, that isn't to say you don't have a point; algorithms can have human-inserted biases, human-inserted bugs in the code. But these are *human* mistakes, and they are not intrinsic features of AI. *Every* human makes mistakes, but a program just does what it is programmed (or what it has learned) to do. Thus there is no fundamental objection to trusting AI with life or death decisions; it just comes down to whether or not its decision-making process hits a better success rate than that of humans.
reddit · AI Responsibility · 1606055400.0 · ♥ 121
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id":"rdc_gd9ae7h","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_gd8bo12","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_gd7gb4h","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"rdc_gd7yeih","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_gd81phx","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
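Because the model codes comments in batches, the coding shown for a single comment has to be picked out of the raw JSON array by its id. A minimal sketch of that lookup, using an excerpt of the response above (the id `rdc_gd7yeih` is the comment displayed on this page; the variable names are illustrative, not part of any real pipeline):

```python
import json

# Excerpt of the raw batch response above (two of the five entries).
raw = '''[
  {"id": "rdc_gd9ae7h", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_gd7yeih", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

# Index the batch by comment id so a single coding can be retrieved.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["rdc_gd7yeih"]
print(coding["reasoning"], coding["emotion"])  # consequentialist approval
```

The dimension values in the retrieved entry match the Coding Result table above, which is how the per-comment display can be reconciled against the raw batch output.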