Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's level 2, like any ADAS, it says you're in charge of the driving, it's your responsibility. Level 2 assistants don't use lidar 99% of the time, they might use RADAR, like Tesla did until 2022, but that's it. If you're comparing with more autonomous systems (level 3 or more), then yes they're using LIDAR but autopilot level 2 is not level 3/4/5. The question is, why are you asking a level 2 assistant to detect an imminent accident that a human could have avoided? Even though it says "please take over immediately"? Of course when you're ready to take over, autopilot (which is not full self driving) is indeed safe. Do you think other car companies take responsibility of their (far worse) ADAS system, when the car crash because humans is not attentive ? Then what would make sense is to look at a software that is meant to be fully autonomous, like FSD SUPERVISED. So, look at FSD >V13.2 videos, it's still an assistant that needs to be supervised, but it's REALLY good and safe, 90% of the time better and safer than any human. Buy you didn't show any fatal accidents with any version of FSD, although FSD is meant to be unsupervised in the long term (maybe 2026?)when perfected. But even if you did show a FSD crash, it still says it should be supervised and you should take over at anytime.
Source: YouTube, AI Harm Incident, 2024-12-29T09:3…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
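A minimal sketch of a record type for one coding result, assuming Python (the language and the CodingResult name are illustrative, not part of the pipeline shown here); the values noted in the comments are those observed in the raw response below, not necessarily the complete codebook.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    # One coded comment, using the dimensions from the table above.
    comment_id: str
    responsibility: str  # observed values: "user", "company", "none", "unclear"
    reasoning: str       # observed values: "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # observed values: "none", "regulate", "liability", "unclear"
    emotion: str         # observed values: "indifference", "resignation", "outrage", "fear", "approval"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"
```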
Raw LLM Response
[{"id":"ytc_Ugzlilf0kjcmOnrv5xt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxJWIDt6oOiorm5J754AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyIZVw6KZCzKCEbXMx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyFs-QA1DH-NXdQTLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyZKKhyCmaofP2AyTB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyRDicqcNmcNWbYGtl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxQ97OxvHySPKgo3n54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugym16jXfrxj8jmWybl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugxmw3WUMY0VbzNONA94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw_6xhJE7PCeHxJkLt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]