Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We have also, in a sense, programmed ourselves what to do in situations like this. We know that we should avoid collisions with objects and cars, but given the circumstances we decide how to act. Perhaps if I were to see someone on the right side of the SUV, I would prefer getting hit, but if the driver is far from the edge of the car I would rather hit the car. One can program a computer not to make specific decisions, but to consider the circumstances at the moment.

Edit: I wrote this without watching the entire video, but the motorcycle problem will not be a problem for a car. A robot wouldn't think ethically about "punishing", but would try to maximize the chances of survival for all sides, so choosing the helmet-wearing motorcyclist would be the "correct" choice; or perhaps the computer would prefer getting hit because its chances of survival are much better.
YouTube AI Harm Incident 2021-06-28T10:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyHUa4MwqTkhowKM4h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxHkU5bsdw5P7-8JZp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzdBJyQxP861oljRU14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzgl7iBXRe5tFYLHYt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzUKFC8r1V6HNUTydd4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgxFxLtLEjGce9wq8iZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxI7TNMt_TdEOfHQpN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw6WKiqU09QhjUsxKJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzCgvj2u0e_RN985ix4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxaV7FZRp4oz9oNI6t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
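As a minimal sketch of how a response like the one above could be consumed, the following parses the JSON array and indexes each comment's four coding dimensions by its id. The `index_codings` helper and the two-entry sample payload are hypothetical, not part of the original pipeline; only the id values and dimension names come from the response shown above.

```python
import json

# Sample raw LLM response: a JSON array with one coding object per comment id.
# These two entries are copied from the response above; the variable name and
# helper function are illustrative assumptions.
raw_response = """[
  {"id": "ytc_UgyHUa4MwqTkhowKM4h4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzgl7iBXRe5tFYLHYt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the model output and index the coded dimensions by comment id."""
    codings = json.loads(raw)
    return {c["id"]: {d: c[d] for d in DIMENSIONS} for c in codings}

by_id = index_codings(raw_response)
print(by_id["ytc_UgyHUa4MwqTkhowKM4h4AaABAg"]["reasoning"])  # consequentialist
print(by_id["ytc_Ugzgl7iBXRe5tFYLHYt4AaABAg"]["emotion"])    # outrage
```

Indexing by id makes it straightforward to join a coding back to its source comment, as the panel above does for a single comment.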