Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Maybe we can make AI smart enough to get between fighting entities and start asking the question to them, "Should I want to mess up humans?" I pray, "Humans, don't". What's in order coming into your mind?
youtube AI Harm Incident 2022-12-10T04:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyyhFjCIOocUiTP4_p4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyP5yg-Rs6VPWF8WHF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw5w1oXPl4D_Ad0sdB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyL67H2JZF3yLvNyc94AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyDPSsBhH4jiMneJLR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw9TFdlVJJD_iJd9mN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwrvCVdEKQBBkQH3PJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz7yqEy8FQ11t1HNpJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy6w9apDTS3sopB45d4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyCK274GKKIE8eGhUB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
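The raw response is a JSON array in which each record codes one comment on four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated before use, assuming the response text is available as a Python string (the function name and key set here are illustrative, not part of the tool itself):

```python
import json

# Keys every coded record is expected to carry (per the response schema shown above).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and verify each record has the expected keys."""
    records = json.loads(raw)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing keys: {missing}")
    return records

# Example with the first record from the response above.
raw = ('[{"id":"ytc_UgyyhFjCIOocUiTP4_p4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes[0]["emotion"])  # fear
```

A check like this catches the common failure mode where the model returns prose or drops a field, before the records reach the coding table.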