Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we consider the cars to be able to make meaningful decisions, shouldn't that decision be put on par with our reaction and be considered as such? A reaction is a decision taken in a matter of milliseconds - in humans, it is even systematic (on average). If you were driving the car, chances are you would just try to survive. And if we consider the car a living, thinking object - as we hopefully do - shouldn't it try to survive as well? Put a horse in place of a car here, and now you get ethical dilemmas with horses. If the horse was taught to crash into the most harmful object, it is still the horse's fault to some extent. The question eventually boils down to whether or not we should trust our life into the car before we hop in. If we do, we must accept that the most meaningful decision for the car is for it to minimise the damage to itself. Will we ever treat artificial intelligence the same as ours? If the answer is yes, then you will see AI go to jail for something like this. If we consider them as tools, then we shall do as we always did.
Source: youtube · AI Harm Incident · 2018-06-21T11:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
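
The four dimensions above are categorical codes applied to each comment. As a minimal sketch of how one coded record might be represented (the class name and the listed value sets are assumptions inferred from this page, not a documented code book):

from dataclasses import dataclass

# Hypothetical container for one coded comment. The dimension names mirror
# the table above; the example values in the comments are the labels visible
# on this page, not an exhaustive schema.
@dataclass
class CodedComment:
    id: str              # YouTube comment id, e.g. "ytc_..."
    responsibility: str  # e.g. "ai_itself", "developer", "company", "government", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # e.g. "liability", "regulate", "ban", "none", "unclear"
    emotion: str         # e.g. "outrage", "approval", "resignation", "indifference", "mixed"
    coded_at: str        # ISO 8601 timestamp shown in the result table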
Raw LLM Response
[ {"id":"ytc_UgxNu6orq72VYmfHfwB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzZh_afhC_OOFGLQXR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyCTPsECAP96PGh3Hl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"resignation"}, {"id":"ytc_UgxJKKQ9sqMp_Ti81H54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxqNM8gW2hHoyqHe5l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxnIUdclFIQ-4Rv2z54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzlBBbufEX3_0ASe354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwCGqFtlXMJ6s8DwaZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwtXdfQuJN50FtVCfZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyosQqVFTeUat0GwLx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]