Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel misled by Uber, who originally claimed that the pedestrian walked off a curb right before the car passed. That is not the case, the pedestrian was walking perpendicular across the street. Human eyesight could not have avoided this crash, but an automated system with lidar or ultrasonic radar could have. Uber's characterization of this incident made it seem like even the car could not brake in time, and that seems to be false. The robot may not be at fault due to the technicalities of the incident, but it failed to do what it was designed to do. Then the question is, do we judge automated cars by human standards? Do the optical sensors take precedence over the radar data? And does the car only have responsibility to respond to optical data? This is a clusterfuck of responsibility that needs some working out.
Source: YouTube · Incident: AI Harm Incident · Posted: 2018-05-08T05:3… · Likes: 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          mixed
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "ytc_Ugz1qxA7RlRk5aozwDZ4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgysHw1Yf3yNDzWDorx4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzqEE5klv7vKudZNDJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgwUcsD5HnYvpyTgOll4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwHIrj_foFNO5HEv-J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"}
]
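A batch response in this shape can be checked before it is written back to the coding table. The sketch below is a minimal validator, assuming the allowed values per dimension are only those visible in the records on this page; the actual codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the records shown above.
# Assumption: the real codebook may permit additional categories.
SCHEMA = {
    "responsibility": {"user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"resignation", "outrage", "mixed"},
}

def validate_batch(raw: str) -> list:
    """Parse the raw LLM JSON array and reject out-of-schema values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# One record from the batch above, used as a smoke test.
raw = ('[{"id":"ytc_UgwUcsD5HnYvpyTgOll4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"mixed"}]')
records = validate_batch(raw)
print(records[0]["emotion"])  # mixed
```

Validating before ingest keeps a hallucinated label (e.g. an emotion outside the codebook) from silently landing in the coded table.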