Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While an interesting thought experiment, it is still limited by its own logic. In the real world such problems can be solved in a number of ways, such as addressing the problems themselves rather than the consequences: "don't tail a truck that close" would be the simplest suggestion (or rather, don't tail a truck with open cargo at all). These things won't be programmed at all; in this situation the car would simply treat the falling cargo as a surprise obstacle in the road and simply crash into it. A car that "chooses" between two people would be illegal, since it would then be an AI choice rather than a human's choice/error. These errors and mistakes in programming will happen in the first couple of generations of autonomous cars and will tend to zero as time and experience are accumulated. It's the transitional period.
YouTube AI Harm Incident 2015-12-08T17:1…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UggmIyJ8SloWNngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghTisOhXvg2MXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggJ7uf4xwzHrHgCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ughd4nDqmE0otngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghTs3eIZEp4CXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiT1_uxg4Qf93gCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgiiYSCGtUOQQ3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ughs2ea7-kE5XHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugj_gIAyUkWWl3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggO5i8Su4Fd-HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
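A minimal sketch of how a raw response like the one above can be parsed back into per-comment codes. The ids and field names (responsibility, reasoning, policy, emotion) are taken from the response shown here; the variable names and lookup helper are illustrative, not part of any actual coding pipeline:

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UggJ7uf4xwzHrHgCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiT1_uxg4Qf93gCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

# Index the codes by comment id for direct lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# The comment quoted in this section was coded as responsibility=user,
# reasoning=deontological, matching the "Coding Result" table above.
code = codes["ytc_UggJ7uf4xwzHrHgCoAEC"]
print(code["responsibility"], code["reasoning"])  # user deontological
```

Looking the codes up by id is what lets the single displayed comment be matched against one entry in a batch response covering many comments.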