Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Your hypothetical is ridiculous, as is your reasoning. You're also forgetting the fact that everyone said road conditions were poor, and not even a human driver would've had time to stop. You can conjure up all kinds of what-ifs, but they're not practical or worth discussing. The good news from all this is studying why the car didn't stop in time and using that data to prevent further accidents. Driverless cars can learn, humans cause tens of thousands of deaths a year for doing far less, like looking at a cell phone or driving in a storm. I used to live in San Francisco, which had a high rate of pedestrian deaths in broad daylight. You make it sound like these cars should be perfect right off that bat. Your expectations are way to unrealistic at this point, but the truth remains driverless cars are already performing better than human counterparts.
Source: youtube · AI Harm Incident · 2018-03-21T03:0… · ♥ 3
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "ytr_Ugy30j-MLGTPJEoQbbh4AaABAg.8e1bdL0U05w8e1y1uQeAXX", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugy30j-MLGTPJEoQbbh4AaABAg.8e1bdL0U05w8e2vzl7USVA", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugwh9SlG34FgIbbK3W54AaABAg.8e-PyLIKZM98e0I6N2lOtN", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxmAU0la-ySz045TZ94AaABAg.8e0WIwmaDVn8e5ZbXSAw7w", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugx9BDAQk5JGMtn8i8t4AaABAg.8ePmGba2peh9m_F8cmNjDM", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
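When inspecting a raw response like the one above, it helps to parse it and check that every record's labels fall inside the coding scheme before trusting the per-comment coding result. The sketch below is a minimal, hypothetical validator: the allowed-value sets are inferred only from the labels visible in this response (the real codebook likely permits more values), and the `raw` string is truncated to two of the five records.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw = """[
  {"id": "ytr_Ugy30j-MLGTPJEoQbbh4AaABAg.8e1bdL0U05w8e1y1uQeAXX",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugy30j-MLGTPJEoQbbh4AaABAg.8e1bdL0U05w8e2vzl7USVA",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]"""

# Hypothetical closed vocabularies, built only from values observed in the
# response above; extend these to match the actual codebook.
SCHEMA = {
    "responsibility": {"none", "company"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "outrage", "approval"},
}

def validate(records):
    """Split records into (valid, errors) by checking each coded dimension."""
    valid, errors = [], []
    for rec in records:
        bad = [dim for dim, allowed in SCHEMA.items()
               if rec.get(dim) not in allowed]
        if bad:
            errors.append((rec.get("id"), bad))  # record id + offending dims
        else:
            valid.append(rec)
    return valid, errors

records = json.loads(raw)
valid, errors = validate(records)
print(f"{len(valid)} valid, {len(errors)} invalid")
```

A lookup by comment id (e.g. `next(r for r in valid if r["id"] == some_id)`) then recovers the single coding result shown in the table above.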