Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
13:00 Is it not obvious to you why an LLM trained not to give medical advice refuses to believe even the implication that it gave medical advice, or that it wouldn't accept liability for the company which owns it?
youtube · AI Harm Incident · 2025-11-26T00:4…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   company
Reasoning        deontological
Policy           liability
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
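
For reference, this result is one row in a flat coding schema. The sketch below models such a record in Python; the class and field names are assumptions for illustration, and the allowed-value sets list only the codes observed in this export, so the actual codebook may define more.

```python
from dataclasses import dataclass

# Codes observed in this export; the full codebook may define additional values.
RESPONSIBILITY = {"user", "company", "ai_itself", "none"}
REASONING = {"virtue", "deontological", "consequentialist", "mixed", "unclear"}
POLICY = {"none", "liability", "regulate", "unclear"}
EMOTION = {"outrage", "indifference", "mixed", "fear"}


@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-27T06:24:53.388235"

    def is_valid(self) -> bool:
        # True when every dimension holds a code observed in this export.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```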
Raw LLM Response
[ {"id":"ytc_Ugy9hiLOfY15jJyYh854AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyTB5mNdLJa5agXabh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwFLog5hlWfFcqtxE54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgypB_HXTKWxtCL8ytp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyk4EUqGq5cNmDDvSB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugz__t-0eS_sfcOtTl14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_UgweZoBiuMuDchW1qBt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugylj7j_KS1IwmLTbQJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwiuR4OysVJ7L9yMOZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzJNrzW8WeajW4BwnB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]