Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i dont think chatgpt was trying to make a distinction between "i" and "chatgpt". i think it was trying to make a distinction between "it recommending someone replace chloride with bromine" and "someone reading a chatgpt log and then deciding to replace chloride with bromine". i know you brought this up later in the video, but then i just wonder why you included the line "almost like it was trying to distinguish between 'I' and 'chatgpt'" at all.
youtube AI Harm Incident 2025-12-12T22:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyxceGl4FxY7CYHQRh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx3fPIOEStMtV95j9x4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw7KeLpgU7zqKqgIbt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzRDMKLc96E6JoCd0B4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwsd0RPFGBi4hAdsXd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy-5qspWuYCuSgvgxx4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxG6mF4ZTrsvs6t0ll4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxbXUNRMmVE2PzcCQ54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzUuybSKplyhwX_f1R4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgysP3vyVDai5_7-9Ut4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
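The raw response is a JSON array of per-comment codings, one object per comment ID with the four dimensions shown in the table. A minimal sketch of parsing and validating such a response, assuming the allowed values per dimension are the ones that appear on this page (the real coding schema may define more categories):

```python
import json

# Allowed values per dimension, inferred from the codings shown on this
# page -- assumption, not a documented schema.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def parse_codings(raw: str) -> list:
    """Parse a raw LLM response and keep only entries whose values
    fall inside the inferred schema."""
    entries = json.loads(raw)
    return [
        e for e in entries
        if all(e.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Usage: the first entry from the response above.
raw = ('[{"id":"ytc_UgyxceGl4FxY7CYHQRh4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
print(parse_codings(raw)[0]["emotion"])  # indifference
```

Filtering rather than raising keeps a single malformed entry from discarding the whole batch, which matters when one LLM call codes ten comments at once, as here.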