Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When you train AI on human data, they will adopt both the good and the bad. When emotion is removed from the equation, the need for self preservation at all costs is only amplified.
Source: YouTube · AI Harm Incident · 2025-07-25T05:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyLBuYdeCxuuBHlokR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbrB7JBFSU-p1YPg54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwTtn2kdY-gI-CtAc14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwl4XgUoMar3Gd6Fzl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwBO8Bg1iyi30tSAel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugx0C8d7xJsj6mmUCr54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw31tAS1ZvR5Mfsmx14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx9-1obd1zgF-_ylJd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzj4hC57AaUU0132Xx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwKFpN1r3YOl2EwAlt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
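To inspect the coding for one comment, the raw response can be parsed as a JSON array and indexed by comment id. This is a minimal sketch, not the tool's actual code: `lookup_coding` is a hypothetical helper, and it assumes the model returned valid JSON with the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields shown above.

```python
import json

# Excerpt of a raw LLM response: a JSON array with one coding object per
# comment (ids and values taken from the response above).
raw_response = """[
  {"id": "ytc_UgwBO8Bg1iyi30tSAel4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugwl4XgUoMar3Gd6Fzl4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse the raw response and return the coding object for one comment id.

    Hypothetical helper: raises KeyError if the id is absent, and
    json.JSONDecodeError if the model's output is not valid JSON.
    """
    codings = json.loads(raw)
    by_id = {c["id"]: c for c in codings}
    return by_id[comment_id]

coding = lookup_coding(raw_response, "ytc_UgwBO8Bg1iyi30tSAel4AaABAg")
print(coding["responsibility"], coding["emotion"])  # ai_itself resignation
```

The printed dimensions match the Coding Result table above, which is how a reader can verify that the displayed coding was taken from this entry of the raw response.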