Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with this concept is that you do not consider that the self-driving car was not keeping a safe distance from the truck in front of it. Even as a human driver you are expected to keep a safe distance, and extra distance should be added because it is a truck carrying an unsecured load; at least visibly, you can see it is a high-risk load, as it contains stacked items. Also, the example clearly shows unsafe driving by all participants: for one, you are not supposed to pass on the right side, and you should stay with the flow of traffic in one lane. So the human factor of bad driving is huge here. You also forget that over time more cars will be autonomous, so more and more cars will keep a safe driving position; hence this problem will eventually become irrelevant. You could also argue that the decision made by the "programmer" for the car is no different than the random choice you would make in the moment. And when an autonomous car makes decisions, it also considers new situations within fractions of a second: if it swerves and then detects another collision, it can quickly react to that. In theory the car might swerve so quickly and accurately that it would not crash or hit anything. I believe this thought experiment blatantly ignores obvious factors and how differently humans and machines react, especially when it comes to reaction time. That alone makes the concept pointless. In the end you still have the human error: the fault lies with the person who did not secure the load.
Source: YouTube · AI Harm Incident · 2017-02-14T22:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
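
For anyone scripting against pages like this, a coding result is just one record with four categorical dimensions plus a timestamp. Below is a minimal sketch of that record as a validated Python dataclass; the value sets are only the labels observed in the raw response on this page, not an official codebook, and all names are illustrative.

    from dataclasses import dataclass
    from datetime import datetime

    # Labels observed in the raw response on this page; the real codebook may
    # define more values. These sets, like the class name, are assumptions.
    RESPONSIBILITY = {"user", "developer", "company", "ai_itself", "distributed", "none"}
    REASONING = {"consequentialist", "deontological", "contractualist", "mixed"}
    POLICY = {"none", "liability", "industry_self", "regulate", "unclear"}
    EMOTION = {"resignation", "fear", "indifference", "outrage"}

    @dataclass
    class CodingResult:
        responsibility: str
        reasoning: str
        policy: str
        emotion: str
        coded_at: datetime

        def __post_init__(self) -> None:
            # Reject any label outside the observed value sets.
            for value, allowed in ((self.responsibility, RESPONSIBILITY),
                                   (self.reasoning, REASONING),
                                   (self.policy, POLICY),
                                   (self.emotion, EMOTION)):
                if value not in allowed:
                    raise ValueError(f"unexpected label: {value!r}")

    # The row shown in the table above:
    result = CodingResult("ai_itself", "consequentialist", "unclear", "indifference",
                          datetime.fromisoformat("2026-04-27T06:24:59.937377"))

Keeping the allowed sets next to the record makes a drifting prompt (for example, a model inventing a new emotion label) fail loudly at parse time rather than silently polluting the dataset.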
Raw LLM Response
[ {"id":"ytc_UggPlXqhTyqn-HgCoAEC","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgijP7n1AYDAFHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgjSelYS_yNxMXgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UghtfnAXloUXangCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ughv4M1zM_ZhFHgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}, {"id":"ytc_UggWc282B73l5ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugir1uoAgHGQ63gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UghiAb5OOQ50H3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UghKAohdhKOGKHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgjB5UYNyemZAngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"} ]