Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing random samples.
Comment
Autonomous weapon systems that, once activated by a human operator, will independently kill the first target meeting some pre-programmed criteria without human intervention already exist, and have done so for well over a century: minefields. What modern technology offers is simply a slight improvement in the complexity of the targeting criteria. Rather than just ground pressure and ferromagnetic material proximity, the criteria can now include thermal, visual, and auditory factors, making the system more selective.
If the new systems are operated on the same basis as current ones (with clearly marked geographic boundaries that exclude areas where civilians are expected to go), they are no worse than the regular "war is terrible". If they are let loose in civilian populations (with just the accuracy of the targeting criteria to prevent a civilian massacre), whoever activated the system in the first place should be prosecuted for war crimes...
Source: youtube · 2024-08-13T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgzxGKMkgqoV0pBS8p54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzia2K7U580HJ4Lycx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwRhPADt1MShC9qPAh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxUZqMDwEFLj6P8wAt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwwgNL5xYsiwehxT8V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwWwlICTpxuMS3GrJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwU3oSiRNJiPafd4g14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwyYFZInXTOR1ztDlB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyzw5bwZk_3BE2LcSt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy2u_taAI4q-n-QQIN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}]
```
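A raw response like the one above is a JSON array of per-comment codings, each carrying the four dimensions shown in the "Coding Result" table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID — the function name and the two sample rows are illustrative, not part of the tool itself:

```python
import json

# Illustrative excerpt of a raw batch-coding response (two rows taken
# from the full array shown above).
raw_response = """[
 {"id": "ytc_UgzxGKMkgqoV0pBS8p54AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_UgwU3oSiRNJiPafd4g14AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]"""

# Index the codings by comment ID so individual comments can be looked up.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; raises KeyError if absent."""
    return codings[comment_id]

print(lookup("ytc_UgzxGKMkgqoV0pBS8p54AaABAg")["emotion"])  # indifference
```

Indexing by ID mirrors the page's "look up by comment ID" workflow: each coded comment resolves to exactly one row of dimension values.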