Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
THE MORAL DILEMMA: A 100% autonomous vehicle is racing down a road. Something happens that causes an unpredictable accident and dangerous scenario to occur. The car has lost control and is now speeding toward two different sets of human beings. It does not have time to avoid all of them, it might be able to avoid one of them, the only other option is to crash into a wall at potentially lethal speeds for the human inside. The "robot" must now decide between smashing straight into 5 young school kids, or five older men. The school kids are more difficult to avoid, any logics engine would calculate that the children are to be hit. What do you think about that? And more importantly, you know damn well you would have smashed into that wall, taking yourself out potentially, but sparing EVERYONE in YOUR CAR'S PATH. But your "autonomous car" won't see life that way. It's going to use its complete lack of emotional understanding as its excuse to choose which humans get to live in that scenario. Now, I did a very poor job of describing a "moral dilemma" with autonomous AI. There are others who've provided you with much more realistic and frankly gut wrenching scenarios, just Google them, and you'll see what I was trying to convey here. Human emotion is so essentially vital in each and every aspect of our daily lives and how we interact and communicate and live safely among each other. Logic alone is nowhere near enough to create a safe, functioning, effective AI, and it baffles me why these "great minds" don't seem to really care about this fundamental problem. Anyway, whatevs, maybe you see what I'm trying to say here.... :)) Stay human ;))
youtube 2023-03-01T03:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugz4rAoE_l3rAklWmgB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugxjpg0T_DQ_kRiK1I54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgxLA2xArwXWJ3b0f0F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwFwLZ616gHrJfRsZh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyZ8R-5R0RB9ZKlNJR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxi7IEFLkLG2i_yAUh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyH-yx2mwy7asW3f7l4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw3fc0r6GI_qO1-lWZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyDKBsiteKsLVN9DPZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyTvkzbucpnXNSor2Z4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]