Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is also the entire argument of “if auto pilot is so safe how come you have it programmed to automatically turn off milliseconds before an accident occurs?”… Something that has been widely reported for several years now.
youtube AI Harm Incident 2025-08-16T16:1…
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | unclear
Policy         | unclear
Emotion        | unclear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugw0F3XLEUsfyFOVobN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyZi3YnyHLC_KGKT-d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugzb1C-4nFtWBrui0JJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugxr1fXN1qD5Aw68emh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugx8qZAjzVbaFYzapzl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgycVmgrDs7TRXJF90R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugy8K9nkAsuSO_2tOCt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugw1FwFF1iyjqD3NyZ54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugy_QWat5FfUuvuMAyZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgyTv1eVoVxFtR3zybd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"})
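One plausible reason every dimension in the coding result above reads "unclear" is that the raw response is not valid JSON: the array opens with "[" but closes with ")". A minimal sketch of a defensive parser for such responses, using hypothetical names (parse_coding_response, DIMENSIONS) that are not part of the original pipeline:

```python
import json

# Dimensions assumed from the coding result above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; return [] when the JSON is malformed."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        # Malformed output (e.g. a stray ")" where "]" was expected)
        # yields no usable codes, so every dimension falls back to "unclear".
        return []
    # Fill any missing dimension with "unclear" so downstream code sees
    # a uniform record shape.
    return [
        {dim: rec.get(dim, "unclear") for dim in DIMENSIONS} | {"id": rec["id"]}
        for rec in records
    ]

# The stray ")" reproduces the failure mode shown in the raw response above.
print(parse_coding_response('[{"id":"ytc_x","responsibility":"company"})'))  # → []
```

A parser like this makes the failure visible instead of silent: an empty result flags the whole batch for re-prompting, rather than quietly coding each comment as "unclear".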