Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_UgxokDeMC…` — "What they are forgetting is God!! God will never allow for that to happen and wi…"
- `ytc_UgyCFdb0n…` — "Option one: If one country achieves AI supremacy first, will that inevitably lea…"
- `ytc_UgynJm-Mo…` — "I cant believe we got to a point to saying "Grap a pen and learn" Like it was n…"
- `ytc_UgwyP5i1Q…` — "I cannot stress how wrong the professor is regarding Asimov's laws being LLM ali…"
- `ytr_UgwY5Dn5T…` — "@richardmcbroom102 would you really trust your consciousness to be uploaded to t…"
- `ytc_UgyDVEksS…` — "This just in: the ai does not understand that lobsters will die if they are out …"
- `ytc_Ugw02vdwb…` — "i honestly wouldn't have much of an issue with AI if: a- the people making the M…"
- `ytc_UgzfnKoMh…` — "As an actual artist who has seen a lot of AI “art” online, the moment I notice t…"
Comment
> There is definitiely still some tough choices, the trolley problem might not be exactly what we get but close enough.
> If the car is going too fast to stop safely who takes priority, pedestrian or passenger?
> If the car is autonomous it almost certainly did not commit a mistake, so maybe there the passenger's survival takes priority and instead of driving off and killing them to save the passer by, it reduces damage caused to the lowest possible degree.
> Most of the time there will be a way to do no damage to live people, but this does matter because there will be other instances.
> There doesn't need to be machine error for these situations to happen, humans are dumb and might run through the street. What if the person running is a child, does that change who takes priority even if the child is definitely the one making a mistake?
> Don't get me wrong, automated cars will eliminate almost all traffic incidents and are already much better than human drivers when put in good conditions they are trained for, but that doesn't mean we shouldn't care.
Source: reddit · Topic: AI Responsibility · Timestamp: 1648691618.0 (Unix epoch, seconds)
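The raw Unix timestamp stored with the comment can be rendered human-readable with a small conversion — a minimal sketch, assuming the value is epoch seconds in UTC:

```python
from datetime import datetime, timezone

# The comment's stored timestamp: Unix epoch seconds, as a float.
raw_ts = 1648691618.0

# Convert to a timezone-aware UTC datetime for display.
posted = datetime.fromtimestamp(raw_ts, tz=timezone.utc)
print(posted.isoformat())  # 2022-03-31T01:53:38+00:00
```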
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_i2s8j5h","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"rdc_i2smx2p","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_i2sjcg5","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_i2s4sm4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_i2s8p86","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
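A raw response like the one above has to be parsed and validated before its codes can be shown in the per-comment table or looked up by comment ID. The following is a minimal sketch; the function name is illustrative, and the allowed values per dimension are inferred only from the codes visible on this page (the full codebook may contain more):

```python
import json

# Allowed values per coding dimension, inferred from the codes visible
# in the table and JSON above (an assumption, not the full codebook).
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "regulate", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, rejecting entries with unknown code values."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry["id"]
        for dim, allowed in ALLOWED.items():
            if entry[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={entry[dim]!r}")
        coded[cid] = {dim: entry[dim] for dim in ALLOWED}
    return coded

# One entry from the response above, used for the ID look-up view.
raw = ('[{"id":"rdc_i2smx2p","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codes = parse_llm_response(raw)
print(codes["rdc_i2smx2p"]["policy"])  # liability
```

Keying the result by comment ID makes the "look up by comment ID" view a plain dict access, and the validation step catches an LLM that drifts outside the codebook instead of silently storing a bad code.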