Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "It's fine if the human operated jobs are increasingly taken over by automation, …" (ytr_UgzqXY-YO…)
- "I teach Community College (because I am washed up as a Mechanical Engineer) and …" (ytc_UgwnIJPXu…)
- "i can't believe that there are people so rude and idiotic who don't understand t…" (ytc_UgwwnE7fR…)
- "This legit the same story as marcus in watch dogs 2. Predictive ai says he will …" (ytc_Ugy9bRfPL…)
- "I asked AI what to do about my pizza toppings pulling off leaving just sauced cr…" (ytc_UgwZEm_hI…)
- "these people resigning from google saying AI is sane, something very fishy going…" (ytc_UgzTE-Tzu…)
- "I have a question if i use AI and redraw with digital art dose it count as drawn…" (ytc_UgzDFYUgP…)
- "I have total of less than ten chats with Ai. Going late into the game. I found m…" (ytc_Ugz81sxA5…)
Comment
That has little to do with a human officer panicking/being biased and killing unarmed civilians mistakenly thinking they are armed, or mistakenly shooting innocent people in general. Robots would, because they have no inherent fear, bias, or actual life to defend, reduce those situations drastically (as long as we program them to value human lives and not their own...). Officer status is not considered by people involved in these split second decisions.
I think what you're getting at is if someone *does* "assault" a robot, what is the reaction from human officers/criminals. Not sure how to handle that case. Do they shoot a guy for destroying their robot? I hope not, but probably..
But most unarmed civilian deaths caused by police are cases without an obvious assaulting aspect (traffic stops are a huge one), so it would at the very least help eliminate those situations.
Source: reddit · AI Moral Status · Posted: 1574785927 (Unix timestamp, 2019-11-26 UTC) · ♥ 80
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | utilitarian |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_f8sbal5","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_f8tgif5","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"rdc_f8srdjf","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"rdc_f8sp1im","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_f8sk2za","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
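Looking up one comment's coding from a raw batch response like the one above can be sketched as follows. This is a minimal illustration, assuming the response is a JSON array of records keyed by `id` with the four dimensions shown in the coding-result table; the function name `lookup_coding` and the inline sample data are hypothetical, not part of the actual pipeline.

```python
import json

# Hypothetical raw batch response in the shape shown above:
# a JSON array of coding records, one per comment ID.
RAW_RESPONSE = """
[
  {"id":"rdc_f8sbal5","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_f8tgif5","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
"""

# The four coding dimensions from the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the coding for one comment ID."""
    records = json.loads(raw)
    for record in records:
        if record.get("id") == comment_id:
            # Keep only the known coding dimensions, ignoring any extra keys.
            return {dim: record[dim] for dim in DIMENSIONS}
    raise KeyError(f"no coding found for {comment_id}")

print(lookup_coding(RAW_RESPONSE, "rdc_f8tgif5"))
# {'responsibility': 'none', 'reasoning': 'deontological', 'policy': 'liability', 'emotion': 'approval'}
```

Restricting the returned dict to `DIMENSIONS` keeps the lookup stable even if the model emits extra fields in some records.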