Raw LLM Responses

This section shows the exact model output behind each coded comment, alongside the structured coding result it was parsed into.

Comment
I think they're going a little far. To be honest, my biggest problem with autonomous weapons is that governments will no longer be subject to any humanity it's citizens might have - they can repress it with unthinking, unfeeling machines. If you start firing into crowds of unarmed protestors with soldiers, you can end up with a situation like in egypt where they say "no more". If the government can just pump out an unlimited amount of autonomous drones without humanity then there's no longer a credible threat, if things get too bad, of rebellion. But ultimately it's unstoppable. Not that it'll really matter once our major cities drown and we experience massive famines. It'll just be another interesting point of "hey look at humans making shit that they don't have the intellectual capacity to use responsibly", the same as we have always done. I think ana is wrong though, an AI used in a war is likely to be more moral than a human soldier. If they just make one right now, it'll be programmed not to target civilians because that's a war faux pas, so it won't. Whereas we already know that, for example the israeli military does deliberately target civilians and then claims it's accidental. And people are way too happy to kill other people, civilian or otherwise. Not all, but enough. And they can be trained into doing it. You're kidding yourself if you think the majority of people genuinely value human life. They like to claim to, but they don't.
Source: YouTube, 2015-07-30T07:4…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgjkgmxODtESCXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgjbpuyUEIR7OHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugj1Uwatnd1hWngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgjEm9qE1zrMfHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UggRAIcWejPePHgCoAEC","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugjj5NuyX86BrHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgjMAnTX7OoOd3gCoAEC","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UghekMCcJDg0nngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugg0L8t7tYNYYHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjpK97zsewKUXgCoAEC","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"}]