Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As far as I recall (I did university research on the topic as part of my undergrad, but that was a couple of years back), the UK and Australia have signed memorandums stating that AI technology used will have human-in-the-loop systems, although where the human would sit exactly was up to circumstance (i.e., would the human designate targets, or would humans give permission for the AI to designate them). I thought the US was part of this as well until I read your HARPY breakdown. I am fairly sure that the EU is also considering it. I am unsure how worthwhile human-in-the-loop designs actually are, especially as computing advances continue to outpace things like (especially) legislation. All it will really take is for an opponent to not introduce HITL controls and use HOTL instead, and human casualties will quickly escalate. The time taken for a human to safely select a target is time the AI could have used to line up and sequence 15 other missile strikes. It's not as if humans don't make errors in war, but I believe it is more that chains of command are followed and there is some sort of accountability in said chain, which an AI would lack.
youtube 2024-07-01T01:5…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzW6UjiqtO1qYsd-dt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwOhNiPw_AmseWCdrF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwSXopO3bghs8Filf14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy4LUy11iJ_f3SRpJt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzZP64I66XNz9VZTn54AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyGkqhxJL6RgVDOzrl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzJX81t0Lm-tTcTy0N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzlZkSAafF8AYU4EkV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwz8iEmSQIp78TsEPl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz30RFeeuSOa5znOTh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
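A raw response like the one above can be checked before its values are written into the coding table. The sketch below is a minimal example, not the pipeline's actual code: it assumes the response is a JSON array where each record carries an `id` plus the four dimensions shown (`responsibility`, `reasoning`, `policy`, `emotion`), and the `parse_codings` helper name is hypothetical.

```python
import json

# Fields every coding record is expected to carry (assumed from the
# example response above; the real codebook may define more).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and verify each record's fields.

    Raises ValueError if the payload is not a list or a record is
    missing one of the expected dimensions.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing: {sorted(missing)}")
    return records

# Usage with one record from the response shown above:
raw = ('[{"id":"ytc_Ugwz8iEmSQIp78TsEPl4AaABAg",'
       '"responsibility":"government","reasoning":"deontological",'
       '"policy":"regulate","emotion":"approval"}]')
codings = parse_codings(raw)
print(codings[0]["policy"])  # -> regulate
```

Failing loudly on a malformed record makes it easy to spot responses where the model dropped a dimension or returned prose instead of JSON.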