Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the discussion of edge cases really misrepresents where we are at with AI. The problem is they are not good at extrapolating. The very fact they need to train on these edge cases shows that. A human will not need to collide with a charging elephant first, to know to stop. But the current technology cannot make the correct decision for something it hasn't already seen. These cars will be safe only when they can be trained on *typical road scenarios* like a learner driver would and then succeed in being safe in the edge cases, not the other way round!! No person needs to drive 1 million miles to be a safe driver.
Source: youtube · 2026-03-27T14:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_UgxGDzvI_UtT4CV_Id54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwhTa9_wzy6HmEVtNh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyRvpSbb4zt4DN1okl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw31_WjUIyW6810Qo94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwJjhQC3yDz060wHEt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]