Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
this scenario ONLY considers the way a human brain works and makes decisions. What autonomous driving COULD allow is that the car will NOT have to make this decision, as it ALREADY, LONG ago, analyzed all these possible scenarios and anticipated the plausible accident, including the right distance, the right speed and the right trajectories. PLUS, a thing humans can NOT do is share their experience and data in real time with the entire network, in seconds. Meaning that the more autonomous driving there is, the more data is collected, and the more precise each prediction will be. We will NOT have to make a choice, we will have to make them GOOD enough. I've been teaching in the automotive business for 16 years and I'm always amazed to see the reaction of the public when you ask: what percentage of today's accidents is attributed to human failure? People answer on average 95-98%. When you tell them that with autonomous driving we could reduce the risk of accident to less than 0.01%, they still prefer to drive... Mind blowing.
YouTube AI Harm Incident 2023-08-19T11:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugwqc1_q2DdUOgJryI54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz1yrdGDed9s8wbpop4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwEJwX76AinvoS1s5d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwRqHjlDaElPKWED_14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxXr4QY4lnbmQR0nAF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxO_Ujk5rSvOjWRKBB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxdMf8xwWtitYQSG9Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxA0UzFRipN-4avxKF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugxr6E9-mHJqZTtbMkB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgzEQs0TPw7Wr4XUt_V4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"fear"}
]
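A downstream consumer of a raw response like the one above typically parses the JSON array and validates each row against the codebook's label sets before accepting it. A minimal sketch of that step; note that the ALLOWED sets below are inferred only from the values visible in this export, and the real codebook may define additional labels:

```python
import json

# Allowed labels per dimension, inferred from this export (an assumption,
# not the authoritative codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "distributed", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "approval", "resignation", "outrage", "disapproval"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and keep only rows whose labels are all valid."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # A row passes only if every coded dimension carries a known label.
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical single-row response for illustration.
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
print(len(validate_codings(raw)))  # 1 row passes validation
```

Rows with an out-of-vocabulary label (e.g. a hallucinated emotion) are silently dropped here; a production pipeline might instead log them for re-coding.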