Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I ride a scooter but as you can see from many of the comments, this doesn't have to do with Tesla, this is the driver's responsibility, AI or not. There are fires with gas cars and there are many accidents with gas cars but it seems when these occur in a Tesla, it's somehow different, as if the car should be perfectly correcting any human error, I don't get it. Moreover, when using the Autopilot in a Tesla, the car reminds the driver to put their hands on the wheel, and if ignored over 4 times, the person can't use the Auto Pilot anymore. When someone is involved in an accident they usually try to see how they are not responsible, in this case, they blame the car's AutoPilot system even if it wasn't actually active, they just hope no one will find out.
youtube AI Harm Incident 2022-09-03T18:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxX-Ayn7yUJznfc0t54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx-PlPlFxZMlrKl7L14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzvDnbzXfa5FBoo7Zl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwgOExhv5yZa_XdtpB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyoednUGu41_yo24md4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwANF7DrFOhKxikA-R4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8e0l4Nu2l9bYutH94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwxG7tG4EnJHJkTtht4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzErz4Be_uJzuUAsAh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzncrNhH5J6ZBBj3N94AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"indifference"}
]
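The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions. A minimal sketch of recovering one comment's codes from such a batch (the comment id and field names are taken from the record shown above; the array is truncated to that single record for brevity):

```python
import json

# One record from the raw LLM response above; field names match the actual output.
raw = '[{"id":"ytc_UgzvDnbzXfa5FBoo7Zl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}]'

records = json.loads(raw)
# Index the batch by comment id so any single comment's codes can be looked up.
by_id = {r["id"]: r for r in records}
codes = by_id["ytc_UgzvDnbzXfa5FBoo7Zl4AaABAg"]
print(codes["responsibility"], codes["reasoning"], codes["policy"], codes["emotion"])
# → user deontological none resignation
```

These values match the coded result displayed for this comment, which is how the dimension table above is populated from the raw output.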