Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
From another source: [https://arstechnica.com/cars/2019/11/how-terrible-software-design-decisions-led-to-ubers-deadly-2018-crash/](https://arstechnica.com/cars/2019/11/how-terrible-software-design-decisions-led-to-ubers-deadly-2018-crash/)

> A 2018 [report from Business Insider's Julie Bort](https://www.businessinsider.com/sources-describe-questionable-decisions-and-dysfunction-inside-ubers-self-driving-unit-before-one-of-its-cars-killed-a-pedestrian-2018-10) suggested a possible reason for these puzzling design decisions: the team was preparing to give a demo ride to Uber's recently hired CEO Dara Khosrowshahi. Engineers were asked to reduce the number of "bad experiences" experienced by riders. Shortly afterward, Uber announced that it was "turning off the car's ability to make emergency decisions on its own, like slamming on the brakes or swerving hard."

This is why I would trust Google's cars over Uber's. Google's business model does not require their self-driving cars to succeed. So there's less pressure to make sure there are no hiccups along the way. Uber meanwhile, has their entire business model kind of riding on this. The pressure is much more immense to succeed. You would think that this would make it more likely that Uber's cars are better, but pressuring your employees more doesn't make their success more likely...it just makes them more likely to make it *look* like they have.
reddit AI Harm Incident 1573268878.0 ♥ 12
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_f6xvg2j", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_f6y669s", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_f6xcnzm", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_f6xim11", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_f6y4amz", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
```
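As a minimal sketch of how a raw response like this can be turned into per-comment coding results, the snippet below parses the JSON array and defaults any missing dimension to "unclear". The function name `parse_coding_response` and the fixed dimension list are assumptions for illustration, not part of the actual pipeline.

```python
import json

# The four coding dimensions shown in the result table above (assumed fixed).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw):
    """Map each comment id to its coded dimensions, defaulting to 'unclear'.

    Assumes `raw` is a JSON array of objects, each with an "id" key and
    zero or more dimension keys, as in the raw LLM response above.
    """
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

raw = '[{"id":"rdc_f6y669s","responsibility":"company","reasoning":"unclear"}]'
codes = parse_coding_response(raw)
print(codes["rdc_f6y669s"]["responsibility"])  # company
print(codes["rdc_f6y669s"]["policy"])          # unclear (missing key)
```

A defensive default matters here because, as the result table shows, the pipeline records "unclear" whenever the model's output cannot be matched to a valid code.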