Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why was the self driving car tailgating in traffic so that it didn't have time to brake? "Given an incompetent programmer made unsafe code, what should this same incompetent programmer program it to do after the unsafe action causes a no-win scenario?" That is an invalid question. The answer to self driving is the same, in this and all scenarios. The car should slow if it is unable to stop for the predicable failure conditions (loose loads falling off, kids running out between parked cars, and the like). And, in any scenario, the car should stop within its lane. 99.9% of times, that's the safest. "But what about that 0.1% of the time?" doesn't matter. The human got it wrong then too, so just brake in your lane, and you eliminate all the moral questions, and overall, save lives. There should be no choice.
Source: youtube · AI Harm Incident · 2023-10-11T20:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          outrage

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgzyoQHfkvKymBmesal4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxjDv4Z1CBjO3WJHB94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyTShSLnJ9cwL-Lbw54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugw2KRypsW4jIcRnBj14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugw5i-H8AgNVVhk3L5Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyLQD8OSQSPK_gIU7Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxcoaaNEw2BPDxcR8B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_Ugy1OKJlc-JkmZ4FchN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzCOQgM8jWOa4MaGkB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugxcp38aM6btPES_LwF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]