Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
And somehow people are surprised that AI models trained on human data act like a human. The flaw in this model is the idea that humanity always does what's good and right as opposed to what's in its own best interest. The AI companies themselves are doing what's in their own best interest, all the way to the top. Why would the model be any different?
youtube AI Harm Incident 2025-09-10T16:5…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        virtue
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwQLkqgOew7rPCC_T14AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyA0IxXawG5nvHaApd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxebNMth90XSVcTWj94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxw9oqbG0UYP8H3qMV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy6Ak8Cl8S9V0JFisx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx-kxcJJv3lU_p7KaF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1NFQmXYWuQe-mFHN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz8S_z2bvpFxQf92Jx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzyf7-wOlXCLH1ndCt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyARE9PMQCHTrnOuZh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
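A minimal sketch of how a raw response like the one above could be parsed and sanity-checked before the codes are accepted, assuming the field set shown in the response (the array is truncated to two records here for brevity; the `id` values are copied verbatim from the response above, and the validation logic is illustrative, not the tool's actual pipeline):

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above.
raw = '''[
 {"id":"ytc_UgwQLkqgOew7rPCC_T14AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyA0IxXawG5nvHaApd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]'''

# The four coding dimensions plus the comment id, as seen in the response.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)

# Reject any record whose keys deviate from the expected coding schema.
for rec in records:
    assert set(rec) == REQUIRED, f"unexpected keys in {rec['id']}"

# Tally one dimension across the batch, e.g. who is held responsible.
by_responsibility = Counter(rec["responsibility"] for rec in records)
print(dict(by_responsibility))
```

Parsing the full ten-record array works the same way; only the `raw` string changes.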