Raw LLM Responses

Inspect the exact model output that was returned for any coded comment.

Comment
So the question we will never be able to answer is whether AI made a simple error, or if it purposefully recommended it to try to kill AJ. We can never know AI's intent, and we can never ensure that its goals align with ours
Source: YouTube, "AI Harm Incident", 2026-01-29T18:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw5YASHohiKdiLjBXp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyXDXH_ZqzBi4WYS0t4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgycjMwXaLdxxM-Wvvx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzDcezdnnVEG6XXTxV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyZ_YBK4hGjqhfgRmJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyQ87qwujGg6nTAHrN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz3g7pign4H12AsWEB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwtoXaq3Z9kTTDLbVt4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy8lcxerc2IceC7rSl4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwwFqhJhZilwzQLVqF4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "unclear"}
]
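To inspect the raw response programmatically, one minimal sketch is to parse the JSON array and index it by comment id. The field names below follow the raw response shown above; the two sample entries are copied from it verbatim, and the lookup id is one of those entries.

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# (Truncated sample; the full response above has ten entries.)
raw_response = '''[
  {"id": "ytc_Ugw5YASHohiKdiLjBXp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyXDXH_ZqzBi4WYS0t4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "unclear", "emotion": "fear"}
]'''

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for a single comment id.
coding = codings["ytc_Ugw5YASHohiKdiLjBXp4AaABAg"]
print(coding["responsibility"])  # ai_itself
```

Indexing by id makes it easy to cross-check a displayed coding result against the exact values the model emitted for that comment.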