Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is not a ethical dilemma, most likely a self driving vehicle would be dictated to simply break if an object were detected ahead and not have a response so heavily based on so many variables to cause least damage, the option to continue into the object is stupid as just breaking before impact would cause least damage as any vehicles behind could do the same if necessary
youtube AI Harm Incident 2015-12-08T16:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugi1caexrD8SbXgCoAEC", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UghiyJc91JDD0XgCoAEC", "responsibility": "none",      "reasoning": "consequentialist", "policy": "regulate",  "emotion": "indifference"},
  {"id": "ytc_UgjSNaOTvVqec3gCoAEC", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UggO6Otzv4JUNHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UghVla3dyO4UzngCoAEC", "responsibility": "none",      "reasoning": "consequentialist", "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgiZ-f7irifHvXgCoAEC", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UghhlM_s-YcMu3gCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgiQEwrx-Avbs3gCoAEC", "responsibility": "user",      "reasoning": "contractualist",   "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgiwHyTQg5h2BngCoAEC", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UghMEOjSVojWFXgCoAEC", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"}
]
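To turn a raw response like the one above back into per-comment coding results, the JSON array can be parsed and indexed by comment id. A minimal sketch follows; the `parse_codes` helper and the `DIMENSIONS` tuple are assumptions for illustration (only the field names and two of the ids come from the response above, and the sample payload is trimmed to two records for brevity):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codes.
# The real response in this record contains ten such objects.
raw = '''[
  {"id": "ytc_Ugi1caexrD8SbXgCoAEC", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UggO6Otzv4JUNHgCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"}
]'''

# Hypothetical list of the four coded dimensions seen in the response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(payload: str) -> dict:
    """Index the coded records by comment id, keeping only the four dimensions."""
    records = json.loads(payload)
    return {r["id"]: {d: r[d] for d in DIMENSIONS} for r in records}

codes = parse_codes(raw)
print(codes["ytc_UggO6Otzv4JUNHgCoAEC"]["responsibility"])  # developer
```

Looking a record up by id this way is what lets a coding result (like the Dimension/Value listing above) be matched back to the exact comment the model was scoring.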