Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is IMHO highly disappointing (although not in the least surprising) that the media (that I've read so far) takes no notice of the flagrant gross negligence of the monitor/driver nor the company's obvious oversight in tolerating it. The driver/monitor should be charged with manslaughter or reckless driving. It was his job to stay alert at all times to be ready to take control if the car failed to react to avoid an accident. He was clearly abusing his job as a means of getting paid for snoozing in exactly the same way many night security guards do. In the case of security guards, most things they have to watch for do not happen in an instant, so it's less serious in their cases (although still far from acceptable). But the Uber monitor's sole task was to stay CONTINUOUSLY ALERT in order to override and prevent an accident caused by an EXPERIMENTAL Artificial Intelligence. He CHOSE to fake it. If he could not stay awake and alert, it was his responsibility to end his shift early.

He does not appear to be at all concerned about constantly nodding off. He's almost certainly aware that his performance is being monitored on camera, yet clearly feels that as long as he nods off for only an instant, it's all OK, EVEN when it's a long sequence of such instead of an isolated occurrence. That he feels that way strongly suggests that his supervisors have been overlooking such behavior, probably because it's an almost inevitable result of paying such low wages that the only people who can afford to take the job are doing it as a second full-time job, with inevitable chronic serious deficits in attention in a job whose SOLE PURPOSE is a high level of alertness at all times.

If I were running the company making the AI, I would immediately bar Uber from using it any more. If the contract was remotely suitable, it should allow that for breach of contract due to gross negligence. It is clear that the pedestrian was also being negligent.
But had the driver been alert and doing his job, he almost certainly would have braked enough to very significantly reduce the speed at impact, probably enabling the pedestrian to survive, possibly without even serious injury. That this Artificial Intelligence technology is being developed with such cavalier disregard for performance strongly suggests, IMHO, that it is likely to be released for broad use while serious problems remain.
YouTube AI Harm Incident 2018-03-24T17:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgzCQ44Cg1Md9zU1EhF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzpsJ__r0N7gbEHzNp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwlQI3p3kX7MPYT5Y54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugx827I7qz11nS6-KKN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgyQL2iyLyyfUKTYHuV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgwOp0N0eJQbbJGK7FV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgxDiYRj-r_H2-7ZEDh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugyr7nzZGq9xk_Jlyr94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgwoCxiJBTSmYhvxVNh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzPiLcv4IIOYGr7E4B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"})
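Note that the coded result above shows every dimension as "unclear" even though the raw response contains concrete codes. One plausible cause is that the raw response is not valid JSON: it closes with `)` instead of `]`, so a strict parser would reject the whole array. A minimal sketch of a defensive parser that falls back to "unclear" on malformed output (the `parse_codes` helper is hypothetical, not part of the original pipeline):

```python
import json

# The four coding dimensions used by this pipeline.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> list[dict]:
    """Parse the model's JSON array of per-comment codes.

    Returns a list of dicts with all four dimensions present;
    on any parse failure, returns an empty list so the caller can
    fall back to coding every dimension as "unclear".
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []
    if not isinstance(data, list):
        return []
    out = []
    for item in data:
        if isinstance(item, dict) and "id" in item:
            # Any dimension missing from the response defaults to "unclear".
            out.append({"id": item["id"],
                        **{d: item.get(d, "unclear") for d in DIMENSIONS}})
    return out

# Illustrative fragments: the malformed shape (trailing ")" as in the
# raw response above) fails to parse, while the corrected shape succeeds.
bad = '[{"id":"ytc_x","responsibility":"user"})'
good = '[{"id":"ytc_x","responsibility":"user"}]'
print(parse_codes(bad))   # []  -> caller codes everything "unclear"
print(parse_codes(good))  # one record, missing dimensions filled with "unclear"
```

A stricter alternative would be to re-prompt the model on parse failure rather than silently defaulting, which would surface truncation bugs like the stray `)` instead of masking them as "unclear".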