Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think LAWs themselves as described here are really that scary. What scares me is not when an AI identifies a target from training data, it's when an AI makes higher-level strategic decisions around where its fleet of autonomous weapons goes and operates as well as their engagement parameters. It's the next logical step after making the weapons decide how to engage targets, but it's also a seriously difficult thing to make happen. I would compare LAWs that just identify targets as being like muskets or other single-shot gunpowder weapons while AI that makes strategic and operational decisions to be like a self-loading firearm. They both use gunpowder/AI and they're both dominant, but the second is very obviously a huge advancement from the first; it requires a lot of small advancements to become possible. But when it does? I'm scared.
youtube 2024-07-03T00:5… ♥ 1
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | none                       |
| Reasoning      | consequentialist           |
| Policy         | regulate                   |
| Emotion        | fear                       |
| Coded at       | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyrfSOqNX0lDZn1nyR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyzNHKBWwYs2JZI27p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzV9nyMDjmRs2iNTs94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyDRziKgo1zUyF_rqp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwo7cL3udsdsb2687V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyLa4PNGPTQJRqz5tt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxOJhsc7lmkcHenonF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwwmn_MHcaY-SHSr2d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwnPrlRV9YpgQHJK_B4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwrtQPIu0q07orwPjp4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
```
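The raw response is a batched JSON array, so recovering any one comment's coding result amounts to parsing the array and indexing by comment id. The sketch below is a minimal illustration of that lookup, assuming the model always returns a well-formed JSON array keyed by `id` (the excerpt embeds only two of the ten records above for brevity):

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten records).
raw = '''[
  {"id": "ytc_Ugwwmn_MHcaY-SHSr2d4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwnPrlRV9YpgQHJK_B4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

records = json.loads(raw)               # the batch is a JSON array of records
by_id = {r["id"]: r for r in records}   # index records by comment id

# Look up the coded dimensions for one comment id.
coded = by_id["ytc_Ugwwmn_MHcaY-SHSr2d4AaABAg"]
print(coded["responsibility"], coded["policy"], coded["emotion"])
# → none regulate fear
```

Note that this record's values match the coding-result table above, which is how a coded comment can be traced back to the exact line of raw model output.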