Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You seem to be telling us that the car did not see the pedestrian, but as far as I know that is not yet known as the accident details have not yet been released (especially the car data). There are clearly circumstances that would lead to an accident even with perfect vision and perfect software, so your assumption that the software was less than perfect is, at this moment in time, unfounded. One interesting question I have, is to what extent self driving vehicle software is designed to handle the aftermath of a collision. Does it simply slam on the brakes on collision or does it try to find a safe place to park? I think a human driver would have to assess the situation in detail and I wonder if such assessments have been done by the self driving software designers or if they are assuming no accidents. I see lots of test runs for scenarios, but zero published test runs for post accident scenarios.
youtube 2018-03-21T12:2… ♥ 2
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyBwr53QSrFwsgZ6Fx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgzFdYKkc3EPrgIw4QB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxyqDNXW822gUmjenp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwILlhd8deTJLVCDhV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzyAX9tJXO7m_YMoOZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwmTB7eP-BaIKS3dZl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugyv__Up7HC5I2xbfa14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxdjWT7OaG50E2OsJR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugw0hEIGcJirB1lT3Ut4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw92ZW-Q11YB0JOI9Z4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
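Because the model returns one JSON array covering a whole batch, each coding must be matched back to its comment by `id`. The sketch below shows one way this lookup might be done; the parsing code is illustrative, not the tool's actual implementation, and uses a truncated two-entry sample of the response above.

```python
import json

# Truncated sample of the raw batch response (two of the ten entries above).
raw_response = '''[
  {"id": "ytc_UgyBwr53QSrFwsgZ6Fx4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugw0hEIGcJirB1lT3Ut4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]'''

# Index the codings by comment id so any single comment's result can be retrieved.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up the coding for the comment shown on this page.
result = codings["ytc_Ugw0hEIGcJirB1lT3Ut4AaABAg"]
print(result["responsibility"], result["emotion"])  # unclear indifference
```

Indexing by `id` rather than by position keeps the lookup robust if the model returns the entries in a different order than the comments were submitted.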