Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One point that's worth considering as well that wasn't brought up is the programming decisions that go into these self-driving cars and the implications for people who are potentially hurt by those decisions. As an example, suppose a self-driving car is going down a crowded (but fast-moving) highway when suddenly, debris falls in front of it. If it hits the debris, it will crash, and its passenger will likely be injured or killed. If the self-driving car swerves to avoid the debris, it will hit another car, causing those passengers injury or killing them. It's hard to predict what any given human driver would do (some of us might swerve, some of us might not, and both positions are understandable), but for the car, its response has already been programmed into it. It's been programmed either to prioritise its own passenger's life at the expense of others, or to sacrifice its passenger, and there's no real control over which it is.

Equally, after a crash like this happens, there are always going to be questions of liability. Someone is going to be responsible for medical bills, funerals, all those sorts of things. Whose fault is it that the car did this? Who should be held accountable? Is it whoever programmed the car? The company as a whole? The passenger who bought a self-driving car?

In the 2018 case you mentioned, the courts ultimately found the person monitoring the car guilty of manslaughter, even though she was a minimum wage worker who had received very little training on how the car worked or what it did. The car, it turned out, had acted according to its programming when it killed the bicyclist, but it wasn't the company or the people who programmed it that suffered the consequences - it was only humans.

This, to me, is one of the scarier elements of self-driving cars. They are, as you said, accountable to no one, nor is there any real effort to make them accountable.
Laws are fundamentally not equipped to deal with these cars or the deaths they'll inevitably cause. Without accountability, there's no real incentive to make them safe, inasmuch as that was ever possible in the first place.
youtube 2026-02-04T23:2… ♥ 26
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxqUzAbSWrrchYpEhd4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyrsfcsZZ1TY-pbH0N4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgwBreF_PYJ7CujWAVF4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgyukS2KYRtFNCkrSWp4AaABAg", "responsibility": "user",       "reasoning": "mixed",            "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgzlZtaJQKQgyN7Dl6h4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugy4f8mSXubu_a2hhhB4AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgzVMwIkq1iCi26GH054AaABAg", "responsibility": "none",       "reasoning": "virtue",           "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgwOaUp5iwtbP9oAiBJ4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxF3IYonN1rl1H5o_d4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugz2fOAuIchdyJrMeBV4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"}
]
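To turn a raw batch response like the one above into per-comment coding results, the JSON can be parsed and indexed by comment id. The sketch below is a minimal illustration, not the tool's actual pipeline: the `code_for` helper is hypothetical, the field names are taken from the JSON above, and the payload is truncated to two records for brevity.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw_response = """
[
  {"id": "ytc_Ugy4f8mSXubu_a2hhhB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxqUzAbSWrrchYpEhd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(comment_id: str, raw: str) -> dict:
    """Return the coded dimensions for one comment id (KeyError if absent)."""
    records = json.loads(raw)
    by_id = {record["id"]: record for record in records}
    record = by_id[comment_id]
    return {dim: record[dim] for dim in DIMENSIONS}

print(code_for("ytc_Ugy4f8mSXubu_a2hhhB4AaABAg", raw_response))
# {'responsibility': 'developer', 'reasoning': 'consequentialist',
#  'policy': 'regulate', 'emotion': 'fear'}
```

Restricting the returned dict to the known dimensions also guards against the model emitting extra keys that the result table does not display.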