Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"We believe a ban that's difficult to enforce is better than a world flooded with cheap, anonymous, autonomous weapons" I'm in agreement in principle, but the weapons won't be cheap nor anonymous. Currently, military grade explosives are "tagged" to assist in tracing their origins. Why would anyone believe that military grade AI wouldn't be? When it comes to the equivalent of "IEDs" for terrorists there may be a point to make, yet the technology exists now to create an auto-firing motion detector gun and nobody's done it. The fact of the matter is that it's much cheaper to make explosives than robots, and robots would be shut down and tracked down much quicker than explosives can be. This video constitutes a lot of lip-flapping and very little quality information. The concern I have for a military run through AI, with no human intervention, has much more to do with the economic and ecological impact of those weapons than it has to do with the impact on humans. We're already in a situation where an exorbitant amount of money is funneled into advanced weapons projects, and this should only serve to accelerate that. Trying to rein in military expenses in exchange for more robust peace treaties would be a much more fertile area of study.
youtube 2018-04-04T01:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgxXr6p3-h5zZgZePGF4AaABAg.8ecRe2Z2gO98vX52Uxxone","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzW3d21sud-E-HrJYN4AaABAg.8ebVr1dlSd78jjJ_91IT7a","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzBm_yAJquStsoKJwF4AaABAg.8ebOqOqTXZZ8ebPJMTl5Mv","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugy-1gvOJvxS7WgBqXF4AaABAg.8eat0P4kTMI8ekcD1DaV7z","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugx_Dp7z_bCRfowJxJl4AaABAg.8earcRX7r8v8eat8j4cDeR","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_Ugx_Dp7z_bCRfowJxJl4AaABAg.8earcRX7r8v9A5lUlBsvaW","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugw8Yc2mgu9FoHa2_5F4AaABAg.8eaizhUGVh99ELz8n3uRFB","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzZoRdIrjkSS-bAyZ54AaABAg.8eaPGuCKRvw8ekb9xN-LxT","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgyMkcekidWccs1Q-Pt4AaABAg.8e_m0ycRZ9-8ea3O4mojMA","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgyoScBtzkbFRIA7FKl4AaABAg.8e_bvJOqFk08ed496YpWyi","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
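A raw response like the one above can be checked before its values are accepted as codings. The sketch below parses the JSON array and flags entries whose dimensions fall outside the categories seen in this page; the allowed-value sets are inferred from the responses shown here, not from a documented schema, so treat them as assumptions.

```python
import json

# Allowed values per coding dimension, inferred from the entries above
# (an assumption, not an official codebook).
ALLOWED = {
    "responsibility": {"user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference"},
}

def validate_codings(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problems found."""
    problems = []
    entries = json.loads(raw)
    for i, entry in enumerate(entries):
        if "id" not in entry:
            problems.append(f"entry {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                problems.append(
                    f"entry {i}: {dim}={value!r} not in {sorted(allowed)}"
                )
    return problems

# Hypothetical one-entry response in the same shape as the raw output above.
raw = ('[{"id":"ytr_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
print(validate_codings(raw))  # → []  (no problems)
```

Entries that fail validation (e.g. a misspelled category or a missing dimension) can then be re-queued for coding instead of silently entering the results table.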