Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Don't allow cars to move so close to trucks" "Self driving cars shouldn't get boxed in" "Have self-driving cars communicate.." .. The first line of the video? "THIS IS A THOUGHT EXPERIMENT."

The point is the question of ethical decisions made by robotic systems, and when programmers are responsible for those decisions, and indeed what those ethics should be, given that they are being applied by a dumb machine.

Take Asimov's laws of robotics:
1. Do not harm humans or fail to act and thus allow harm to humans
2. Obey instructions from humans unless that conflicts with law 1
3. Protect yourself unless in conflict with laws 1 or 2

These instructions would horribly confuse the self-driving car in the thought experiment, though we desperately need to create (and legislate for) laws like these before we develop responsible robots and most definitely before we create AI.
youtube AI Harm Incident 2016-02-11T18:0… ♥ 1
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | mixed
Policy         | unclear
Emotion        | unclear
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugg2BtWozk8CNngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgjFkjDPjqE2CngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjEW6MP3uLTC3gCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgjbNTENqsljHngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugj-Tm4fiodnsXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggOUDgUdR33tXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgjGwm-c396lkXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Uggj8ubOGU2UeXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugg-1WzQ124krXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugif6gsoLWXGuXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
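Since the raw LLM response is a JSON array of per-comment codes, inspecting the output for a single coded comment amounts to parsing the array and matching on the comment `id`. A minimal sketch, assuming the response parses as valid JSON; the helper name `code_for` and the inline sample are illustrative, not part of the original tool:

```python
import json

# Illustrative sample mirroring the raw LLM response format above
# (a JSON array of coding records, one per comment id).
raw_response = '''[
  {"id": "ytc_UgjEW6MP3uLTC3gCoAEC", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgjbNTENqsljHngCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]'''

def code_for(raw: str, comment_id: str):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = code_for(raw_response, "ytc_UgjEW6MP3uLTC3gCoAEC")
print(record["responsibility"])  # developer
print(record["reasoning"])       # mixed
```

Matching the record against the coding-result table is a quick check that the stored dimensions were taken verbatim from the model output rather than post-processed.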