Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Knowing the right answer to these dilemmas involves advanced programming and knowledge, something that autonomous cars won't have for a long time. The only goal here would be to minimize damage as much as possible. For example, say a deer ran in front of a car. If the car knew it could hit the deer and take the least collateral damage that way, then it should do it. More importantly, if there is no occupant the goal should be to save human lives as much as possible. Additionally, if there are more occupants or children in one vehicle compared to another, this could be another factor helping to make the best decision.
YouTube · AI Harm Incident · 2019-12-05T04:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyY8hea89mmyVR35pN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwY7v7cjww7oOQlHrt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyB0_b6TK8PT7vISdx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyufpnIhu4nx08cBkN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgydFUcos-c2u9kplcx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyP5A_agca7B0tqbB54AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwfznTscFqr9M1BaA14AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgzjIPWV9pMw8zPPp914AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx4-0RV-1R8m98VkZ54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyD53mqEZwoHOxpHE94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
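The per-comment coding table above can be recovered from the raw response by parsing the JSON array and indexing by comment `id`. A minimal sketch, assuming the response is a plain JSON array with exactly the four dimension keys shown (the `raw_response` string below embeds a subset of the entries from the response above for illustration):

```python
import json

# A subset of the raw LLM response shown above; each object codes
# one YouTube comment on four dimensions.
raw_response = """[
  {"id": "ytc_UgyY8hea89mmyVR35pN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgydFUcos-c2u9kplcx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyP5A_agca7B0tqbB54AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the coded rows by comment id for lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for the comment inspected above.
row = codes["ytc_UgyY8hea89mmyVR35pN4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {row[dim]}")
```

This reproduces the Dimension/Value pairs in the coding result above (`responsibility: ai_itself`, `reasoning: consequentialist`, `policy: none`, `emotion: indifference`); the same lookup works for any other coded comment id in the batch.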