Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Personally , I think as a society. Self driving cars should be programmed to maximise utility , happiness as a whole and in the long run when it has to choose what to preserve and sacrifice. And people will have to accept that term when purchasing self driving cars. meaning the car may not prioritize ur own safety first when the time comes. However, regardless, such self driving cars and such mechanism is still way much safer and will decrease lots of accidents and fatalities overall compared to the current state. Also, although the dilemma is a true problem, but personally I think it will be of a really really small number and percentage. And nevertheless, in addition to how self driving cars will decrease the number of accidents drastically, when facing such situations, the reaction timing and the choices it make will also mostly be better compared to how a panic and terrified person would react. Also regarding the motorcyclist choice. I would say choose the one, the person that brings more good or happiness to the world. If they are the same. Then choose the one with the helmet to minimise fatality. Although the other one is being irresponsible. Then , we have to create other laws like severe punishments for people who don't wear helmet and such.
youtube AI Harm Incident 2020-02-23T06:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzPNDPeBSoT_N8ajhJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgweUjyiPy-rfWUfmgx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugxic792s4BamM2UQvp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwJ_xaLdkdwsYvi8SV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxLYblihBUym_tBXj94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwzcGayoQ_nZzhGNK14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzFw6BvO2d63DojTBt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzuLZK7gXq2tXy7G7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzSedXmAbGedXiEjzd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxPAmY7oRz1R-IezD54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
(Note: the model's raw output terminated the array with a stray ")" instead of "]", which makes it invalid JSON as emitted.)
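A minimal sketch of how such a raw response could be parsed and validated before use. The dimension names come from the output above; the allowed value sets are assumptions inferred only from the values visible on this page (the real codebook may define more), and `parse_codings` is a hypothetical helper, not part of any tool shown here:

```python
import json

# Allowed values per coding dimension -- assumption: inferred from the
# responses shown above, not from an authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "industry_self", "unclear"},
    "emotion": {"approval", "mixed", "unclear"},
}

def parse_codings(raw: str) -> list:
    """Parse a raw LLM coding response into records, tolerating a
    stray ')' in place of the final ']' (as in the output above)."""
    cleaned = raw.strip()
    if cleaned.endswith(")"):
        cleaned = cleaned[:-1] + "]"
    records = json.loads(cleaned)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            # Fall back to "unclear", mirroring the coding-result table.
            if rec.get(dim) not in allowed:
                rec[dim] = "unclear"
    return records

# Usage with a shortened example (hypothetical id, same malformed ending):
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"surprise"})')
recs = parse_codings(raw)
print(recs[0]["reasoning"])  # consequentialist
print(recs[0]["emotion"])    # unclear (not in the allowed set)
```

Repairing the terminator before `json.loads` keeps one malformed character from discarding an otherwise usable batch of codings.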