Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we consider the 3 laws of robotics proposed by Asimov in 1942, injuring the human being on the motorbike is out, and risking injury to uninvolved humans in the SUV (and other collateral damage to occupants farther afield) also contradicts the first law. Braking and inflicting large damage to itself contradicts the third law, therefore the AI must always leave enough space from the vehicle ahead to minimize damage to itself. This makes the whole situation irrelevant in the first place. If the collision were to be unavoidable, the third option would rule. Now, let's study a collision against a pedestrian...!
youtube AI Harm Incident 2018-04-12T09:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzmTWvkx_16kZxRipB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwU3JKTd7DXea630ch4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5Al8HpdWeQXVA14V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzR-I1zg7Fd24xkO2B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTRsoOWGCfnwZIeM14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyLGAxBlRBrU6Q9HzF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzGRQqzWI3R8ooBq5t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzTIXhunuFErqKh5k54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgypFlEyseWL_KumX5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwMlxjfJsg85RHvnPZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
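The raw response above is a JSON array of per-comment codes, keyed by comment id. A minimal Python sketch of how such output can be parsed, indexed by id, and tallied per dimension (the comment ids are copied from the array above; the lookup and tallying logic is illustrative, not part of the original coding pipeline):

```python
import json
from collections import Counter

# Raw LLM response: a JSON array of per-comment codes (subset of the array above).
raw = '''[
  {"id":"ytc_UgzmTWvkx_16kZxRipB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyTRsoOWGCfnwZIeM14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzGRQqzWI3R8ooBq5t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

codes = json.loads(raw)

# Index the codes by comment id so a single comment's coding result can be looked up.
by_id = {c["id"]: c for c in codes}

# Tally how often each responsibility attribution appears across the batch.
responsibility = Counter(c["responsibility"] for c in codes)

print(by_id["ytc_UgyTRsoOWGCfnwZIeM14AaABAg"]["emotion"])  # prints "indifference"
print(dict(responsibility))
```

A validation step could also reject any object whose values fall outside the codebook (e.g. an emotion other than fear, indifference, outrage, or approval) before the codes are stored.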