Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have driven over 20K miles on Tesla fsd 13.x and now 14.x. I haven't had any safety critical interventions. I intervene for things like speed control and other minor issues but overall the system is safe if used as directed which means the driver keeps eyes and attention on the road. It dramatically reduces stress while driving, eliminates road rage and provides a much more pleasant and safe driving experience. It is much better than human at merging in traffic and in heavy traffic conditions. It is particularly good for older drivers like me. As you age, it gets harder to follow everything that is happening on the road. Combining a human and an ai greatly reduces the chances of an accident. I highly recommend this system for older drivers or anyone who has some kind of impairment. In fact, these systems need to become standard equipment for all vehicles. The system I am advocating watches the driver and insists that the driver pays attention to the road. My Tesla will not allow me to look at my phone or do anything else while driving. It will kick you out and not restart if you fail to pay attention. So I am not advocating for attention free driving at this time. I think that will come but I don't think it is here yet.
youtube AI Harm Incident 2025-11-03T14:0…
Coding Result
Responsibility: user
Reasoning: consequentialist
Policy: none
Emotion: approval
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxDtG9bbdQpcX6IxPp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyKPDvESNEusCIfyBR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugylzh2hxbkQI9pbJft4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzLQElVcgkTCYfJ1Zd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwhpwVfKwP2WCe3cPl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyhWr9Ovv_oYZv4UGB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxo8s-GNpIt30eN8mB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzzsiKb3l1QSkIUSR14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyRBVHlKAHLUoL82j14AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyjU9ju_IndwHWrX_N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
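A minimal sketch of how the raw response above can be inspected programmatically: parse the JSON array and look up the coding for a comment by its `id`. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) mirror the JSON in this record; the single-entry payload below is an abbreviated excerpt used purely for illustration, and nothing here reflects an official tool API.

```python
import json

# Abbreviated excerpt of the raw LLM response shown above (one entry only).
raw = '''[
  {"id": "ytc_UgwhpwVfKwP2WCe3cPl4AaABAg",
   "responsibility": "user",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "approval"}
]'''

# Parse the batch of codings and index them by comment id for quick lookup.
codings = json.loads(raw)
by_id = {c["id"]: c for c in codings}

# Retrieve the coding that corresponds to the comment displayed on this page.
record = by_id["ytc_UgwhpwVfKwP2WCe3cPl4AaABAg"]
print(record["responsibility"], record["emotion"])  # → user approval
```

This lookup matches the Coding Result table above: the entry for this comment carries `responsibility: user` and `emotion: approval`.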