Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
[not native here, sorry for grammar] I see that (one little part of) the problem is we would need "ratings" and "data". Imagine the minimum problem: killing human A or B (assuming inevitable), with no significant differences, totally in the same condition (say, both with helmets or belts, same car security rating, position on the road, etc.). THEN we need to come to a "value" of their life. When we deplete information about probabilities and the status of the vehicle, we need to come to "who they are"... sadly.

And we are uncomfortable with this, our culture is. We like to think that there's no "rating" in what we do, that my carbon footprint or my job or my hobbies or my behaviour has no relevance in comparison to others' footprints or jobs, because "I like my job", "I'm free to choose", "I have the right to be happy no matter what". Human A and B is an oversimplified version, because you simply can't find two humans with just one difference. But we still need data to compute, and to choose A, B, C, D, ... we need to put computed data on a scale. And the scale is the problem.

What's worse is that at some point we will NEED to choose, otherwise we can't introduce self-driving cars, THEREFORE not lowering the general incident rate, HENCE allowing more humans to die just because we can't find a human capable of authorizing an emergency procedure. On purpose, this time. I love tech progress :).
Source: YouTube · AI Harm Incident · 2015-12-08T19:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UghSiRcVXA-3FHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugg36gd_wQOCXHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UggzSEiGsQNLKngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UghidMHZsCybB3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgjzNTXzuzIxOngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}, {"id":"ytc_UghfmsovrnUJPXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgjQy7gtc5pA_XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugio_pXgICTxCXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UggKQCpjXBYZKXgCoAEC","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"approval"}, {"id":"ytc_UgggitcG_CbrUXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"} ]