Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated comment previews with their comment IDs):

- "This is why art is important when it comes to creating something like this, I ca…" — ytc_UgyskANg5…
- "I even remove channels with AI thumbnails. Ofcourse we can fight it. Zero tolera…" — ytc_Ugw8yXi36…
- "Autonomous vehicles with chilling ideas to sent you free, Dway or the High Way ,…" — ytc_Ugz6LLO-M…
- "People aren't having kids because they can't afford it more than they have 'abun…" — ytc_UgwA9CgQl…
- "Am trying to imagine what if a robot test his machine gun with you 😢😂😂…" — ytc_UgzWlb13c…
- "95% of ai project fail. Ai isn’t at a point that companies could trust replacing…" — ytc_Ugx5iox_Q…
- "This is the problem with the masses talking about AI. You don’t actually ever wo…" — ytc_Ugxtb3Fz1…
- "Allen's Copyright was rejected because Midjourney owns the Copyright. If you wan…" — ytc_UgwQtwwKG…
Comment
Yes, one could build terrifying robot weapons, and maybe someday these will in fact be built. But that is just one scenario.
Instead, one could make military robots that are just the opposite: as a rule friendly and helpful, able to harm only with a human in the loop, just as, say, a Predator operates today.
It all boils down to what are called rules of engagement (ROEs).
There could (and probably should) be a treaty that all nations can sign that prohibits robots from ever having full autonomy to employ lethal force, and requires a human in the loop either to enable that or at least to limit it to, say, a small region.
Thus an ROE might be: you can kill anything that meets your kill criteria within a 100-meter radius of point X for the next 30 seconds.
The exception would be if a robot is molested: it might be able to autonomously defend itself, but even there with limitations as to how it can do so.
Also, robots could make it harder (not easier) to commit atrocities, in that there are also reporting rules that can be strictly enforced. Once again, it all boils down to the ROEs.
Good engineering means not creating nightmare scenarios but rather making life better. For example, robots could offload soldiers from having to operate in forward bases where the threat is high, while being controlled remotely as to what ROEs they are allowed to employ. See my YouTube video on this, called Battlefields of the Future, where I discuss some of these concepts.
https://www.youtube.com/watch?v=9Twtp7fNYRY&t=284s
Platform: youtube · Posted: 2020-12-05T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw74J2-rdNOx6eVLzJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxfw8Nc6RnPwSSJoX94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyAKHvT1YHHg2ZoxPJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw5c9n4tbkRsx-FHmh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwFFwjyeC3aq9nZzst4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuWa3WV8plW8vHVnN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugxbcfch8cgzynw_PVZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyV6zwPqQ8y6rb3qvt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxEb7lCBa7GiTUWLpB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwDZuuYngqD-DuMXgZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
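The raw response is a JSON array of per-comment codings, one object per comment ID, with one value for each of the four dimensions shown in the table above. A minimal sketch of parsing such a response and looking up a coding by comment ID might look like the following; the `VOCAB` sets are assumptions inferred only from the values visible on this page (the full codebook may define additional categories), and `parse_response` is a hypothetical helper, not part of any real tool.

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the values visible
# on this page only -- the actual codebook may include more categories.
VOCAB = {
    "responsibility": {"developer", "government", "company", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none",
               "unclear"},
    "emotion": {"fear", "approval", "indifference", "resignation", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, validating that
    every dimension carries a value from its (assumed) vocabulary."""
    codings = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in VOCAB.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        codings[cid] = {dim: row[dim] for dim in VOCAB}
    return codings

# Example: the coding for the comment shown on this page.
raw = ('[{"id":"ytc_UgzuWa3WV8plW8vHVnN4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"approval"}]')
coded = parse_response(raw)
# Look up by comment ID:
print(coded["ytc_UgzuWa3WV8plW8vHVnN4AaABAg"]["policy"])  # -> liability
```

Validating against a fixed vocabulary catches the common failure mode of LLM-assisted coding, where the model emits a label outside the codebook; rejecting the whole row forces such responses to be re-queried rather than silently stored.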