Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
You got me wrong in my last sentence: I didn't mean that it has to be perfect at all times. As you stated, it also isn't perfect with human beings. But a computer with blind spots that typically isn't aware of them is so much less capable than a human. Errors on special occasions are currently not the exception for AVs, but the norm. There has to be a way for the AI of AV cars to deal with that in a capable way, for example by slowing down to zero speed and calling for additional information. But it can't be "there is an area I can't see anything, but I will just cross it blindfolded". Or even worse: "I see what I see, and what I don't see doesn't bother me at all". That isn't an acceptable strategy, neither for humans nor for AVs. The fact that there are human drivers who don't pay attention to such blind spots (or have a distraction or whatever) is the reason why there are legal cases against them if they cause human casualties. And that is something that can't be ironed out with statistics: "Oh, we have human casualties, but they are fewer than the normal human errors". For a human driver it means that he can lose his driving license on such an occasion; for an AV system it can mean the same - but for the whole fleet that is equipped with it.
youtube 2018-03-21T01:0… ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_Ugw-wvkKZc9KIwiNtj14AaABAg.8e1mUHM8HBe8e2Dld8YSR8","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxNSoZT7mB4mHEBUfJ4AaABAg.8e1gkiWiWJD8e1vaOKA7jw","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugz5dBIVBv9sS55rQt54AaABAg.8e1cQlxEJlU8e1f-gRY4nP","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_Ugz5dBIVBv9sS55rQt54AaABAg.8e1cQlxEJlU8e1jRVBNEtQ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugz5dBIVBv9sS55rQt54AaABAg.8e1cQlxEJlU8e1nOKZl3us","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyD1b6mLwQFFskEhzd4AaABAg.8e1cBq7EEz88e1nq_0j6Zm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyD1b6mLwQFFskEhzd4AaABAg.8e1cBq7EEz88e24J28nNI9","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugxb00GdocSAr288C6J4AaABAg.8e1cAXsVQ5T8e1gK5lR_tJ","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzpHpTcsqDjgsCglxN4AaABAg.8e1bW4vmhAg8e1tKDxT-C6","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugyyf3r8R8WFb0kWMI54AaABAg.8e1bTumnqMv8e4GT2q8K9i","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
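The coded dimensions shown above can be recovered mechanically from the raw response by looking up the comment's id in the JSON array. A minimal Python sketch of that lookup; the `codes_for` helper is hypothetical, while the field names and ids are taken from the response above (two records excerpted for brevity):

```python
import json

# Raw model output: a JSON array of per-comment codes.
# Two records excerpted from the full response above.
raw_response = '''
[
  {"id": "ytr_Ugz5dBIVBv9sS55rQt54AaABAg.8e1cQlxEJlU8e1jRVBNEtQ",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugw-wvkKZc9KIwiNtj14AaABAg.8e1mUHM8HBe8e2Dld8YSR8",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]
'''

def codes_for(comment_id: str, response_text: str) -> dict:
    """Return the coded dimensions for one comment id (KeyError if absent)."""
    records = {r["id"]: r for r in json.loads(response_text)}
    return records[comment_id]

codes = codes_for(
    "ytr_Ugz5dBIVBv9sS55rQt54AaABAg.8e1cQlxEJlU8e1jRVBNEtQ", raw_response
)
print(codes["responsibility"], codes["emotion"])  # ai_itself fear
```

Indexing the array by id this way also makes it easy to spot records the model dropped or duplicated when validating a batch of codes.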