Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a very thought-provoking subject. However, I do think the scenario outlined has a clear answer. Swerving into other drivers would obviously never be the answer. If a truck in front of you is hauling large heavy objects, the self-driving car should make the premeditated decision to keep a safe distance. This way, if the heavy objects suddenly start falling off the back, there would be ample time for the car to stop and allow the heavy objects to tumble forward away from the car. Minimizing risk to others doesn't have to mean heightened risk to self if the car is proactive in avoiding incidents. There will always be freak accidents that driving algorithms won't be able to compensate for, and that's where making cars safer structurally comes in. Regardless, putting other people's lives in danger because of you or your car's misfortune due to a fluke will never be considered okay in my opinion. The video assumes that not swerving for the tumbling objects means certain death for occupants, and I don't see that being the case in almost any scenario.
YouTube · AI Harm Incident · 2015-12-08T19:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UghSiRcVXA-3FHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugg36gd_wQOCXHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UggzSEiGsQNLKngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UghidMHZsCybB3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgjzNTXzuzIxOngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"ytc_UghfmsovrnUJPXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgjQy7gtc5pA_XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugio_pXgICTxCXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UggKQCpjXBYZKXgCoAEC","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"approval"}, {"id":"ytc_UgggitcG_CbrUXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"} ]