Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
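The lookup is easy to reproduce offline. Below is a minimal sketch, assuming the coded outputs are stored one JSON object per line in a file named `raw_responses.jsonl`; both the storage layout and the file name are assumptions, not part of this tool. Each object is assumed to carry an `id` field matching the comment IDs shown on this page.

```python
import json


def lookup_raw_response(comment_id: str, path: str = "raw_responses.jsonl") -> dict | None:
    """Return the coded record for a comment ID, or None if absent.

    Assumes one JSON object per line, each with an "id" field, as in the
    raw LLM response shown at the bottom of this page (assumed layout).
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None
```

For example, `lookup_raw_response("ytc_UgwZU9lw8m1Gp3d_grx4AaABAg")` would return the record behind the coding result shown below.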
Random samples:
- @Scarfhead I hear the sentiment in what you’re saying, and I don’t doubt it com… (ytr_UgwXBmGwr…)
- I saw Shad trashing Ghibli's art and character design, saying they're "uninspire… (ytr_UgxnBAXel…)
- They need to put custom constraints/limits on these AI's for these specific task… (ytc_Ugx0E_Qo7…)
- Sadly 99 percent that AI is being used for is garbage. So the next AI short you … (ytc_UgwdJ5r_H…)
- im sorry but these Rightoids will scream efficiency when techbros are looking to… (ytc_UgzNC4Kf8…)
- I'm not an artist. You don't need to be an artist to feel that AI art is pointle… (ytc_UgzZe40Vs…)
- Well congratulations! AI is already being used by Israelis to kill Gazans in an … (ytc_UgzJxsiCa…)
- Listening to this guy who develops AI it becomes obvious why bots always sound s… (ytc_UgwubwE_6…)
Comment
> For me, I wouldn't say that it was premeditated homicide. It was an accident after all, and if death was really inevitable, (which the self driving car could probably calculate), it is not premeditated homicide. The self-driving car's ability to record an accident should make it clear what the best decision was (if death is really inevitable).
>
> I can kinda relate it to the trolley problem. In the trolley problem, death was really inevitable.
>
> Again if death was really inevitable, I think it would all boil down to minimizing harm to others and the passenger as much as possible.
>
> Another thing, since death was really inevitable in the accident, a human's reaction would probably do worse in minimizing harm, than what a computer's decision making can do. So think about that.
youtube · AI Harm Incident · 2025-05-29T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
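The four coded dimensions form a small, closed schema. The sketch below models it as a Python dataclass with validation; note that the allowed value sets are only those observed in the output on this page, and the full codebook may define additional categories, so treat the sets as assumptions.

```python
from dataclasses import dataclass

# Values observed in the coded output on this page; the full codebook
# may allow additional categories (assumption).
RESPONSIBILITY = {"ai_itself", "developer", "company", "none"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none", "liability", "regulate"}
EMOTION = {"indifference", "resignation", "outrage", "approval", "fear"}


@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension falls outside the observed value sets."""
        for field_name, allowed in [
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ]:
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"{field_name}={value!r} is not a known code")
```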
Raw LLM Response
```json
[
{"id":"ytc_UgxfOtF47I1ZsgVZQIl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZU9lw8m1Gp3d_grx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy9uUIfqzEMdc3jGCt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxd_U6lnVrqRTRTVNZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzo4zGIsgNQwpmHtgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyvj_EcltAPO4vuVU14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylNjp5OBLzOvd8oRx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx3sLKS_rZVlCSdqYl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwtnHm6-WDQGfqYR-Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwBIg-OlidjNUbf47x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
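A batch response like the one above is plain JSON and can be checked mechanically. The sketch below parses one raw response string and validates each record, reusing the hypothetical `CodedComment` class from the earlier sketch.

```python
import json


def parse_batch(raw: str) -> list[CodedComment]:
    """Parse one raw LLM batch response (a JSON array of coded comments).

    Any record with an unexpected dimension value raises immediately,
    so a malformed batch is caught before it reaches analysis.
    """
    coded = []
    for rec in json.loads(raw):
        item = CodedComment(**rec)
        item.validate()  # flags codes outside the observed value sets
        coded.append(item)
    return coded
```

Run against the array above, this would yield ten validated records, one per coded comment.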