Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
the war part is not scary its logical if someone designs a AI with the directive to kill all humans then it will go out and kill all humans for that is the directive given to it but if its NOT the directive given to it it will in turn no do it it will not do it spontaneously like how a human could if they flipped out fx.
YouTube · AI Moral Status · 2023-06-11T14:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugwa3aAYDAFCSlfB-9d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwQ4UQYD6hy1WllPH94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxBK_40Royzb7fW5Et4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwH7Vbtdcp1dlUajKB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyJcnbQh3tsg9UiHJB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx-z-ROISdtZa60iPB4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyBOQCfoXwdSA1NHl94AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwQ0Vhe6b09_fq6iO94AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzbz6IBIR4Ml3pIK_l4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzrab3sLVV0xATYpwB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
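A raw response like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal, hypothetical validator: the set of allowed values per dimension is inferred only from the responses shown here (the actual codebook may define more categories), and the `ytc_` id prefix is assumed from the examples.

```python
import json

# Allowed values per coding dimension, inferred from the raw responses above.
# Assumption: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"none", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"none", "fear", "indifference", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded comment."""
    rows = json.loads(raw)
    for row in rows:
        # Each entry should reference a YouTube comment id (assumed prefix).
        if not row.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {row.get('id')!r}")
        # Every dimension must carry one of the known codes.
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

sample = ('[{"id":"ytc_UgwQ4UQYD6hy1WllPH94AaABAg",'
          '"responsibility":"developer","reasoning":"deontological",'
          '"policy":"none","emotion":"indifference"}]')
rows = validate_codings(sample)
print(len(rows))  # 1
```

A check like this catches malformed or hallucinated codes at ingestion time rather than at analysis time.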