Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yes, one could build terrifying robot weapons, and maybe someday these will in fact be built. But that is just one scenario. Instead, one could make military robots that are just the opposite: as a rule friendly and helpful, and able to harm only with a human in the loop, just as, say, a Predator operates today. It all boils down to what are called rules of engagement (ROEs). There could (and probably should) be a treaty that all nations can sign that prohibits robots from ever having full autonomy to employ lethal force, and requires a human in the loop either to enable that or at least to limit it to, say, a small region. Thus an ROE might be: you can kill anything that meets your kill criteria within a 100-meter radius of point X for the next 30 seconds. The exception to this is if, say, a robot is attacked; it might be able to autonomously defend itself, but even there with limitations on how it can do that. Also, robots could make it harder (not easier) to commit atrocities, in that there are also reporting rules that can be strictly enforced; once again, it all boils down to the ROEs. Good engineering means not creating nightmare scenarios but rather making life better. For example, robots could offload soldiers from having to operate in forward bases where the threat is high, while being controlled remotely as to which ROEs they are allowed to employ. See my YouTube video on that, called Battlefields of the Future, where I discuss some of these concepts: https://www.youtube.com/watch?v=9Twtp7fNYRY&t=284s
Source: youtube, posted 2020-12-05T21:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugw74J2-rdNOx6eVLzJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugxfw8Nc6RnPwSSJoX94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgyAKHvT1YHHg2ZoxPJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugw5c9n4tbkRsx-FHmh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwFFwjyeC3aq9nZzst4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzuWa3WV8plW8vHVnN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_Ugxbcfch8cgzynw_PVZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyV6zwPqQ8y6rb3qvt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgxEb7lCBa7GiTUWLpB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwDZuuYngqD-DuMXgZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]