Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think there's a gap here when we narrow down the safety problems with 'self-driving' to power in corporations, and when we assume that automated systems have no possibility of error because they can't get tired or distracted. Humans can't glitch because of an unexpected code interaction, and often stop to think if solutions don't make sense. Humans also have physical bodies that interact with the actual world, giving us a lot of tools to process unfamiliar situations in an embodied sense.

A big problem with self-driving, which is alluded to in your introduction but ignored in the conclusion, is that software is a terrible structure to try to encode safety into. It's incredibly abstracted and fluid, and it's also impacted by human error. You don't have to look far to find software glitches caused by spelling errors or incorrect character use, and that's especially true when you have a technology scaling up.

A better comparison than pretending that computers aren't victims of human error, I think, would be realising that every car piloted by the same software is likely to be impacted by the same human error. Thus, it's less of a reduction in error and more a question of whether you want to trade driving your own vehicle out for a set driver (with a known reputation that you can look into).

Honestly though, people need to stop imagining that computers and machines are distinct magical creatures separate from the humans that make them. Software is programmed by people, deployed by people, and runs on physical machinery with distinct real-world affordances. Obviously.
Source: youtube, 2026-02-11T17:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
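The coding result above collapses the model's output for this comment into four dimensions plus a timestamp. As a minimal sketch of how such a record might be represented downstream (assuming a Python pipeline; the CodedComment name and field types are illustrative, only the dimension names come from the table above):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CodedComment:
        """One comment's coding along the four dimensions shown above."""
        comment_id: str       # e.g. "ytc_Ugwuw5yjyWRpL1lro694AaABAg"
        responsibility: str   # e.g. "distributed", "company", "none"
        reasoning: str        # e.g. "consequentialist", "deontological", "mixed"
        policy: str           # e.g. "regulate", "liability", "ban", "none"
        emotion: str          # e.g. "outrage", "fear", "approval", "mixed"
        coded_at: datetime    # when the coding run produced this record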
Raw LLM Response
[ {"id":"ytc_Ugyzb-896IknvLKXd0Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwTNrQ5vvypk0xxoiJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzIuOlJ6BMj2qMw0rJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwq8lBAqxYa4dV4paB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgyNUOUlecBZNzTSGdx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy0Y6EQ_KzGMeUQZAN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyEA9ONgqIAYH061YJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxvrHSxKVjppgFk2jh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwuw5yjyWRpL1lro694AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgytH4OWu-84YEPmfjR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"} ]