Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The cars in the video violate at least three cardinal safety rules of highway driving:

1. You drive on the right and overtake on the left. Boxing in should never, ever happen.
2. Safety distance: if you're driving behind a truck with loose cargo, you need enough distance between the truck and you so that if something does fall from the truck, you can still stop safely. In congestion, you should still have enough distance to slow down to the point of avoiding serious bodily harm.
3. Trucks should have their cargo secured and strapped in so it doesn't fall off.

The situation you describe doesn't happen if at least one of these rules is followed. Fail at any two of them and you should still be fine, and at least one of them is entirely up to the driver. Very, very, very few traffic "accidents" are actual accidents in the pure sense of the word. The vast majority happen when multiple safety rules aren't followed. This applies to both driving and car maintenance. The car's collision-avoidance algorithm is still relevant (randomize it, I think), but it does matter whether the situation arises daily for an average car user or is something national media report on because it's so rare. Situations like these can never be avoided entirely, but they can be made rarer than chicken's teeth if we simply utilize the existing safety protocols, which I strongly suggest self-driving cars do to the letter.
Source: youtube · AI Harm Incident · 2016-06-20T19:0…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
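For downstream analysis, each coding result can be treated as one typed record. A minimal sketch in Python, assuming the field names from the table above; the value lists in the comments are only the categories observed in this section, and the full codebook may define more:

```python
from typing import TypedDict

class CodedComment(TypedDict):
    """One coded comment; keys mirror the dimensions in the table above."""
    id: str              # comment identifier, e.g. "ytc_UghNM3jgeKUHEngCoAEC"
    responsibility: str  # observed: "user", "developer", "company", "ai_itself", "none"
    reasoning: str       # observed: "deontological", "consequentialist"
    policy: str          # observed: "none", "regulate"
    emotion: str         # observed: "indifference", "mixed", "approval", "outrage", "fear"
```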
Raw LLM Response
[{"id":"ytc_Ugi0oNCeHP92AHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjQqqQ8pvsVC3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UggUueruHXVu1ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgijXoYPKjY_1HgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UghGnVVF0cNqSHgCoAEC","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Uggfmpuz0HRxeHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UggRgo7ALDJJCHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UghT-lpLHZCE-HgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UghO2h5e1TxTNXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UghNM3jgeKUHEngCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}]