Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm by no means any kind of tech person, but regarding which to trust between visual & radar, is there any reason why the two can't talk to each other to make decisions? Taking these two incidents for example, the visual saw two taillights close to the horizon & determined it to be a distant car, but radar would've seen a smaller object that was closer up. Had the Teslas still had radar, is there any reason the AI couldn't have taken the input from both sources to determine that it was in fact a motorcycle? Also, in regards to the autopilot shutting off to avoid liability, I feel like you should still be able to hold Tesla at fault there. Since the system only shuts off a second before impact, but average human reaction time is approximately two seconds & change, meaning if you're suddenly barreling into a crash, a driver still wouldn't have time to react & take measures to avoid it. I know the obvious argument Tesla's lawyers would make is that the driver still should've been paying attention to the road, but when they actively sell the feature as "autopilot," that pretty much is telling drivers they don't have to pay attention while using it, hence why it's illegal to sell it that way in Germany.
youtube AI Harm Incident 2023-02-15T06:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxtCiVpffOIctlBNGh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyDzA8SvJD0OiiCKWR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxIYNQ-rfNuoSFZbAh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwxIp4cTEXVxDemmQ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz3cmJCyI4JBfOdxgV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxehAAVxBIyhzTv18V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxQw3oxc3lCtAj2a6p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz-MoShOvlOgd7wgMl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugwtx2KDiZFHXqM1Meh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2gh6gTVL-CFe4z-J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
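The raw response is a JSON array of per-comment records, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion) keyed by comment id. A minimal sketch of how such a response could be parsed and a single comment's coding looked up (the `lookup` helper is hypothetical, not part of the pipeline; the ids and field names are taken from the response above):

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgxtCiVpffOIctlBNGh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyDzA8SvJD0OiiCKWR4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]"""

def lookup(raw: str, comment_id: str) -> dict:
    """Parse a batched coding response and return the record for one comment id."""
    records = json.loads(raw)          # array of per-comment dicts
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

record = lookup(raw_response, "ytc_UgxtCiVpffOIctlBNGh4AaABAg")
print(record["responsibility"], record["emotion"])  # → none indifference
```

Indexing by `id` makes the per-comment table above (Dimension/Value) a straight projection of one record from the batch.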