Raw LLM Responses

Inspect the exact model output returned for any coded comment, alongside the parsed coding result.

Comment
how about making an ai who kills ai due to the fact that ai doesn't protect humans it protects itself.
youtube AI Harm Incident 2025-09-11T17:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugyssvt_SsowG0Jyup94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy3HOkZ5ptP_KNsnFV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy7kIenXrL9WgdFsEt4AaABAg", "responsibility": "government", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwVeECo2XCjgv6Y-fZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy9dYXjtUhOT5sfDfx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgzOETw8f9WPniYu3FV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwkxxcAkmc_LuO3Pnd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzZHF9kT0cn6pdtbZR4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugw_OABAnkYuo_S-Cgt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzDFiKQ_2-OI4z3SGJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
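A minimal sketch of how a raw response like the one above could be parsed and matched back to a single coded comment. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the JSON shown; the `code_for` helper and the abridged `raw` string are illustrative assumptions, not part of the coding pipeline.

```python
import json

# Abridged copy of the raw model output: a JSON array of per-comment codes.
# Only two entries from the array above are reproduced here.
raw = '''
[
  {"id": "ytc_Ugyssvt_SsowG0Jyup94AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw_OABAnkYuo_S-Cgt4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"}
]
'''

def code_for(raw_json, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(code_for(raw, "ytc_Ugw_OABAnkYuo_S-Cgt4AaABAg"))
# → {'responsibility': 'ai_itself', 'reasoning': 'consequentialist', 'policy': 'ban', 'emotion': 'fear'}
```

Looking a comment up by `id` this way makes it easy to verify that the table in Coding Result (responsibility ai_itself, policy ban, emotion fear) matches what the model actually emitted.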