Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Actually, I've seen videos on this very topic and what they are doing. Part of the slowdown in getting self-driving vehicles to be fully self-driving (instead of the partial self-driving where a fully licensed driver is still supposed to sit in the driver's seat, paying attention in case of a mistake) comes from them trying to do exactly that and basically running into the trolley problem. People have used such things in debates, namely the common hypothetical of an unavoidable accident where the car must hit either a cyclist wearing a helmet or a cyclist without one. If the car is programmed to hit the helmeted cyclist because that person is more likely to survive, that inversely makes it safer for cyclists not to wear helmets, so that in such a situation the self-driving car won't choose to hit them. If, however, it chooses to hit the cyclist without the helmet, that cyclist is more likely to die, potentially turning a non-fatal accident into a fatal one. But as I just said, these sorts of scenarios are variations of the trolley problem discussed in philosophy, a discussion that, with variations, philosophers have had since the dawn of philosophy (I say variations because obviously Plato wouldn't have mentioned trolleys, but he still discussed problems with the same inherent meaning). We can't delay the lives that self-driving cars would save until impossible philosophical dilemmas are resolved.

That same video also mentioned some (then recent) statistics for accidents involving self-driving cars, including details of the accidents, as well as explanations for some of the "evidence" people have of how bad they are. On the "how bad self-driving cars are" side, they mentioned the well-known video of someone testing a self-driving car on a track and showing it running over cardboard cutouts of people without even slowing down. The defense was that the vehicle's sensors could detect they were not people, and that braking hard or swerving to avoid the cutouts would have done more harm to the passengers. As for the statistics, while I don't recall the exact numbers, they showed that the self-driving vehicles in a certain area had significantly fewer accidents within a year than human drivers, and of those accidents, only 3 involved pedestrians. And while 3 accidents with pedestrians is not good, 2 of them were the result of a pedestrian walking into a stopped car; I believe the third was at such a low speed that the pedestrian didn't even need medical attention.

So this habit of focusing on the accidents self-driving cars have, and not the larger number they prevent or the larger number human drivers have, really holds people back. But a lot of it is targeted negative media: people already against the idea intentionally misinforming others with partial information (self-driving cars hitting cutouts, without looking into why), insane hypotheticals (the trolley problems), and unfair comparisons. It's very much the same issue we face with nuclear power replacing fossil fuels.
youtube 2023-08-16T03:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgzLciLOwxqBH1njt514AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyGFIDda8eA4_vOdpB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugxg3LXjQgthWDqb7bt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzju5sLTZkwAPK15MR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwYYxvyKxiqLZdOFR94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxg65uHegCCheJHrTN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwZxBImuXFNquOqPvt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugyk_cspxKRoeeR008N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwzG3KHXtFLCecFFul4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgySDaNaWRiooVqu6dZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}]