Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For me, I wouldn't say that it was premeditated homicide. It was an accident after all, and if death was really inevitable, (which the self driving car could probably calculate), it is not premeditated homicide. The self-driving car's ability to record an accident should make it clear what the best decision was (if death is really inevitable). I can kinda relate it to the trolley problem. In the trolley problem, death was really inevitable. Again if death was really inevitable, I think it would all boil down to minimizing harm to others and the passenger as much as possible. Another thing, since death was really inevitable in the accident, a human's reaction would probably do worse in minimizing harm, than what a computer's decision making can do. So think about that.
Source: youtube · AI Harm Incident · 2025-05-29T19:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
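
The coding result above follows a fixed four-dimension schema. A minimal Python sketch of that record is below; the label sets are an assumption inferred only from values that appear in this batch, and the real codebook may contain more categories.

```python
from dataclasses import dataclass

# Assumed label sets, taken from the values observed in this batch only.
RESPONSIBILITY = {"ai_itself", "developer", "company", "none"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none", "liability", "regulate"}
EMOTION = {"indifference", "resignation", "outrage", "approval", "fear"}


@dataclass
class CodingResult:
    """One coded comment: the four dimensions shown in the table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        # Check that every dimension uses a label from the observed sets.
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```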
Raw LLM Response
[ {"id":"ytc_UgxfOtF47I1ZsgVZQIl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwZU9lw8m1Gp3d_grx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy9uUIfqzEMdc3jGCt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugxd_U6lnVrqRTRTVNZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugzo4zGIsgNQwpmHtgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugyvj_EcltAPO4vuVU14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgylNjp5OBLzOvd8oRx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx3sLKS_rZVlCSdqYl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwtnHm6-WDQGfqYR-Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwBIg-OlidjNUbf47x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]