Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is not as big a deal as one might think: if an AI can have the authority to employ lethal measures, it would likely be given strict rules of engagement (ROEs) before it could do so. And if the engineers who build those systems are not idiots, they will include safety features that keep a bot from going berserk, so that a non-AI system can deactivate a crazed AI one and it can no longer control the vehicle. Two such ROEs would be time and place, where a bot can select a target only within a certain radius of its designated aim point, say 100 meters. It would likely have a time window as well, say 30 seconds; when that window expires, the authority would be revoked. Sure, that could still allow for some errors, but so can firing a cruise missile at a target; that is the nature of war. Another ROE is a minimum threshold of ID certainty, something a cruise missile doesn't have. If the system is not sure the target is one it is allowed to hit, it cannot hit that target, but instead hits a default location or circles around and waits for more instructions.
youtube 2026-03-10T19:4…
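The comment above describes a three-part engagement gate: a spatial radius around the aim point, an authorization time window, and a minimum identification certainty. A minimal sketch of that gating logic, with all names and thresholds chosen as illustrative assumptions (not from any real system):

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of the ROE gate the comment describes: engagement
# is permitted only if the target is (1) inside a fixed radius of the
# designated aim point, (2) inside the authorization time window, and
# (3) identified above a minimum certainty threshold. Field names and
# default values are assumptions for illustration.

@dataclass
class Authorization:
    aim_point: tuple          # (x, y) position in meters
    radius_m: float = 100.0   # "a 100 meter radius from that"
    granted_at_s: float = 0.0
    window_s: float = 30.0    # "a time window as well, say 30 seconds"
    min_id_certainty: float = 0.95

def may_engage(auth: Authorization, target_xy: tuple,
               now_s: float, id_certainty: float) -> bool:
    """Return True only if every ROE condition holds; otherwise the
    system falls back (hit a default location or loiter and wait)."""
    dx = target_xy[0] - auth.aim_point[0]
    dy = target_xy[1] - auth.aim_point[1]
    in_radius = math.hypot(dx, dy) <= auth.radius_m
    in_window = auth.granted_at_s <= now_s <= auth.granted_at_s + auth.window_s
    identified = id_certainty >= auth.min_id_certainty
    return in_radius and in_window and identified
```

Any single failed condition (out of radius, expired window, or low ID certainty) denies engagement, matching the comment's "default location or circle around and wait" fallback.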
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxXDGygvxhKOLB1hbZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw9Q0xh9YzYo7A6g2Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzRVCTp_1wHz7lIJv94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy84HB8qHK2enSEPwx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxGAcKqP9ouh9fvwPF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyIZUJMmUwSwmr3Di14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwNAZESJs6SnlTooi54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzEGHIj3h0QYVumwUd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy1VMBLGkV6gqrHbW54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzOAQfFmmlEv2YYo6B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
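Each record in the raw response carries an id plus four coded dimensions. A minimal validation sketch for such records follows; the closed vocabularies are inferred only from the values visible in this response and may be incomplete:

```python
import json

# Hypothetical validator for coding records of the shape shown above.
# The allowed vocabularies are an assumption reconstructed from the
# visible data, not an authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none"},
    "emotion": {"approval", "fear", "outrage", "indifference", "resignation"},
}

def validate(records: list) -> list:
    """Return (id, problem) pairs; an empty list means all records pass."""
    problems = []
    for rec in records:
        rid = rec.get("id", "<missing id>")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append((rid, f"{dim}={value!r} not in vocabulary"))
    return problems

# Example: parse a raw LLM response string and check it.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
issues = validate(json.loads(raw))
```

Running the validator over each raw response before storing the coding result would catch malformed model output (missing dimensions or out-of-vocabulary labels) early.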