Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am not a fan of self-driving cars in its current state, as well as what it holds for the future... With that said, "taking over" requires a reaction time, and or seeing something coming. Since you can't be inside the mind of a machine or know what it will do, or how it will react in a new situation it can take a little longer to react. Also it might not have been avoidable even if both had near perfect reaction times. Accidents come out of seemingly nowhere at times, or through a complex series of events like a chain of dominoes falling. One car pulls out without signaling may force an oncoming car to change lanes, and on and on it can go. General rules of the road like not following too closely, not speeding, stopping at stop signs, etc, etc cannot prevent all accidents.

The only way you can really prevent the vast majority of accidents is having a centralized system that automates the entire process. When you take competing systems at the car level it will always lead to more accidents than if there was one system controlling them all. Will that "more" be the same or more than human drivers? That is possible as well depending on any number of variables. The key in accident prevention is having as much information as possible so you... or the machine can make informed decisions.

If you had an overlay map of all the car positions on a screen you could determine whether it was safe to make a turn, or change lanes or not. So a car speeding up a lane that you don't see, you wouldn't turn out in front of them. Autonomous cars will be safer generally speaking because they will have better information through a variety of sensors.

If you were to take the top of the line systems of today, and only give them two cameras fixed at eye height, 2 mics on either side ear height, distance, etc. Then add the dexterity of a modern human in a mobile frame that could act and react as a human would and put them in any number of scenarios and environments. Do you think they
Source: reddit · AI Harm Incident · timestamp 1490538740.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_dff5uc0", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "rdc_dfeue65", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_dfenat8", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "rdc_dfey9h8", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_dffg501", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
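A response like the one above can be turned into per-comment codes with a small parsing step. The sketch below is illustrative only: the field names and the two sample records are copied from the response shown here, but the `parse_codes` helper and the required-field check are assumptions about how one might validate such output, not the coding pipeline's actual implementation.

```python
import json

# Illustrative excerpt of a raw model response: a JSON array of coded
# comments (records copied from the response above).
raw = """[
  {"id":"rdc_dff5uc0","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"rdc_dffg501","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

# Fields every coded record must carry, per the schema visible above.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw_json: str) -> dict:
    """Parse the model output and index records by comment id,
    silently dropping any record missing a required field."""
    records = json.loads(raw_json)
    return {r["id"]: r for r in records if REQUIRED <= r.keys()}

codes = parse_codes(raw)
print(codes["rdc_dffg501"]["emotion"])  # -> resignation
```

Indexing by `id` makes it easy to join a record back to its source comment, which is how the "distributed / consequentialist / none / resignation" row above lines up with record `rdc_dffg501` in the raw response.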