Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Originally the AI was developed for assisted driving, and the developers didn't want it to make moral choices, like driving you off a cliff to save kids who wandered into the road. They decided it should only take action if a crash was avoidable. Here the crash looked unavoidable, but the AI could not understand that a lower impact speed is preferable. It is easy to overlook such things, which is why there was a human driver. The AI is in testing, like a student driver who needs to be watched.
youtube AI Harm Incident 2018-03-22T17:4… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {
    "id": "ytr_UgznA955RIhaKCv2VUp4AaABAg.8e4_ovosJDl8e5OQyn2GZj",
    "responsibility": "none",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "resignation"
  },
  {
    "id": "ytr_Ugyad0g5czXOKjYCdfl4AaABAg.8e5aJc1uPBF8e65x2DqAb8",
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "liability",
    "emotion": "mixed"
  },
  {
    "id": "ytr_Ugy6Eb-sO5EpudfaFZZ4AaABAg.8e4Os0xU23Q8e4sJSmaiVC",
    "responsibility": "company",
    "reasoning": "deontological",
    "policy": "regulate",
    "emotion": "outrage"
  },
  {
    "id": "ytr_UgxmuAaWoy5CoZjgvzh4AaABAg.8e5HHVzX_Ek8e9f-_w0lf0",
    "responsibility": "none",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "indifference"
  },
  {
    "id": "ytr_UgxmuAaWoy5CoZjgvzh4AaABAg.8e5HHVzX_Ek8e9xy2MgU3u",
    "responsibility": "user",
    "reasoning": "virtue",
    "policy": "industry_self",
    "emotion": "approval"
  }
]
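To inspect the model output for a single coded comment, the raw response can be parsed as a JSON array and filtered by comment id. The sketch below is a minimal example, assuming the raw response is a JSON array of coding records like the one above; the `coding_for` helper and the truncated `raw` sample are illustrative, not part of the tool.

```python
import json

# A truncated stand-in for the raw LLM response shown above (one record kept).
raw = """[
  {"id": "ytr_Ugyad0g5czXOKjYCdfl4AaABAg.8e5aJc1uPBF8e65x2DqAb8",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "mixed"}
]"""

def coding_for(raw_json, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return record
    return None

coding = coding_for(raw, "ytr_Ugyad0g5czXOKjYCdfl4AaABAg.8e5aJc1uPBF8e65x2DqAb8")
print(coding["responsibility"])  # developer
```

Matching the Coding Result table above, the record found this way carries the same four dimensions (responsibility, reasoning, policy, emotion) for that comment.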