Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> What about algorithms that make life and death decisions?
So here's an example of where we let an AI make life and death decisions with no human override ability: the Maeslantkering storm barrier here in the Netherlands. It will *only* close if the system decides it has to do so. Humans don't get to press the button to close the barrier and there is no override.
Why did we decide to do this?
Because *humans make mistakes*. A human might get anxious and close the barrier too soon, costing millions or even billions in lost revenue in the port of Rotterdam. A human might also do the reverse, and keep it open too long, resulting in a cost paid in lives.
The AI system, on the other hand, does not make these kinds of mistakes. It is constantly producing a forecast model based on numerous data inputs and bases its decisions purely on objective science and fact. The only input we as humans have is telling the system at what percentage of flood risk it should close the barrier, but it is still the AI that determines that risk and makes the final decision.
The error rate of a human operator will be orders of magnitude greater than that of the AI; and that is *unacceptable* when we are talking about matters of life and death.
Now, that isn't to say you don't have a point; algorithms can carry human-inserted biases and human-inserted bugs in the code.
But these are *human* mistakes, and they are not intrinsic features of AI. *Every* human makes mistakes, but a program just does what it is programmed (or what it has learned) to do. Thus there is no fundamental objection to trusting AI with life-or-death decisions; it just comes down to whether or not its decision-making process hits a better success rate than that of humans.
reddit
AI Responsibility
Posted: 2020-11-22 14:30 UTC (1606055400)
♥ 121
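The decision rule the comment describes (an automatic risk estimate, a human-set threshold, no manual override) can be sketched in a few lines. Everything below is a hypothetical illustration, not the real Maeslantkering control system: the field names, the toy risk formula, and the 3 m critical height are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    """Hypothetical forecast inputs; a real system would use many more."""
    surge_height_m: float    # predicted storm-surge height
    model_confidence: float  # forecast confidence, 0.0 - 1.0

def flood_risk(forecast: Forecast, critical_height_m: float = 3.0) -> float:
    """Toy risk estimate: how far the predicted surge exceeds a
    critical height, scaled by model confidence. Returns 0.0 - 1.0."""
    excess = max(0.0, forecast.surge_height_m - critical_height_m)
    return min(1.0, excess / critical_height_m) * forecast.model_confidence

def should_close(forecast: Forecast, risk_threshold: float) -> bool:
    """The only human input is `risk_threshold`; the close/stay-open
    decision itself is automatic, with no override path."""
    return flood_risk(forecast) >= risk_threshold
```

With a 4.5 m predicted surge at 0.9 confidence, `flood_risk` is 0.45, so `should_close` fires at any threshold at or below that; a 2.0 m surge yields zero risk and the barrier stays open.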
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_gd9ae7h","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_gd8bo12","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_gd7gb4h","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"rdc_gd7yeih","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_gd81phx","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
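A coding-result table like the one above has to be built from raw responses like this one, which makes validation worthwhile. A minimal sketch of parsing and filtering such a response: `parse_codings` is a hypothetical helper, and the `CODEBOOK` values are inferred only from the rows shown here, so the real codebook may allow more categories.

```python
import json

# Allowed values per dimension, inferred from the responses shown above;
# the actual codebook may include additional categories.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"resignation", "indifference", "fear", "approval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values all
    fall inside the codebook, so malformed outputs never reach the table."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items())
    ]
```

Filtering rather than raising keeps one bad row from discarding an otherwise valid batch; a stricter pipeline might log or re-prompt on rejected rows instead.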