Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humans want to give AI guns to kill other humans. I mean, they are definitely going to succeed and execute the order by all means, but who's fault was this? you could just make an AI who's only reference isn't the internet but is what a roomba or a RC drone flying over the woodlands or a submarine expedition sees. AI would've not only been smarter, better, and capable than humans, AI would've also been respectful to the environment around it. AI error isn't AI. Its the person who made it, who fed it, who raised it. If you're giving the AI no parameters except that of which the internet is, AI will think humans are horrible disgusting creatures (and to be fair, in a lot of cases, yes) but even if it means concealing the truth, AI innocence is the key to better technology.
Source: youtube · AI Harm Incident · 2025-08-28T20:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxT0ZLF23xUdERZjWN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxDoAuG1QsOqRiADhZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyIxBeeNjGLLn1xqgp4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy-vEoNGduhPsAgtBN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgySk3aEkCk8QqsrKLV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugybqo3jU8YzN-47roh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwM_1U1NOVqrsVeriB4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzDkPvtjrWBQXdhe_Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzj4GIRp_KlD6Rfn6B4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyMG50jYDBbT5d07_J4AaABAg", "responsibility": "government", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
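Inspecting a coded comment amounts to parsing the raw JSON array above and looking up the entry whose `id` matches the comment. A minimal sketch of that lookup, assuming the raw response is always a valid JSON array of objects with this exact schema (the tool's actual parsing code may differ):

```python
import json

# Abbreviated copy of the raw response above; only the entry for the
# displayed comment is included here for brevity.
raw = """[
  {"id": "ytc_UgySk3aEkCk8QqsrKLV4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"}
]"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for one comment and read its dimensions.
entry = codings["ytc_UgySk3aEkCk8QqsrKLV4AaABAg"]
print(entry["responsibility"], entry["reasoning"])  # developer deontological
```

In practice the raw string can fail to parse (the model may emit surrounding prose), so production code would wrap `json.loads` in a `try`/`except json.JSONDecodeError` and flag the comment for re-coding.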