Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I had to test out AI generation software and I was shocked - and completely unde…
ytc_Ugyo8A9h5…
I've just started the video, but my main concern about A.I. right now is how it …
ytc_UgyGrgrKN…
Nothing impressive about ai recognizing images and searching for relative inform…
ytc_Ugy-S0O59…
I find that when I get a dumb AI, that repeating "I wish to talk to a person ple…
ytr_UgyMT0f7f…
I hit a conversative with this I was like "So what if the mother is dying and th…
rdc_dcwqhbs
Govt been lying and blackmailing mfs for so many years and now we're surprised A…
ytc_Ugxe5LcBG…
Or, maybe AI is just not suitable for how it is being used, no matter how it is …
ytc_UgyDbHadg…
You know what’d be funny? If some of these automation companies were just minimu…
ytc_UgwTp54_4…
Comment
However scary LAWs may seem, we're still talking about weapons, whose purpose is to destroy, kill or maim. This puts this equipment in a specific context where the risk of destruction or lethality is not only factored in but pursued. As such, they don't represent a fundamental shift but another step towards greater efficiency in achieving tactical objectives. One could make the case that LAWs could eventually prevent friendly fire, which is likely to remain a greater risk and a bigger killer of troops than loitering weapons. As for the greater risk posed to civilians, we have ample and daily evidence that humans do not need bots to willfully target unarmed people. LAWs won't change anything in that regard. So let's not get confused by the "weapon" in LAWs and be clear that the actual risk will come from AI-led autonomous equipment deployed for mundane operations, as far removed from military conflict as can be, that for reasons of misalignment or hallucination starts chopping humans instead of picking up potatoes, or causes trains to crash into each other rather than orchestrating their smooth travel.
youtube
2026-03-11T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgyShxlBMXmwUeb47Ml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwQIiLNX5LlyFLQPNp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugx-ljCMPkKCBFTqprt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwCbQ32d9CErkaM57R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugzu8FpW7D0LnOwKaB94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugxs6mgMafn0SVm2X514AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgysX8jMPkKJLalfXI54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwZyGNN85JJ2D4IC_F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgznvGQeDAEWJs7uv6x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwRO_K27xt6CJ-HcUN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"unclear"}]
```
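The raw LLM response is a JSON array with one coded record per comment, each carrying an `id` plus the four coding dimensions shown in the table above. A minimal sketch of the "look up by comment ID" step, assuming the response text is available as a plain string (the record structure is taken from the output above; everything else is illustrative):

```python
import json

# Raw LLM response: a JSON array of per-comment codes
# (truncated to two records here for illustration).
raw_response = '''
[{"id":"ytc_UgyShxlBMXmwUeb47Ml4AaABAg","responsibility":"none",
  "reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwCbQ32d9CErkaM57R4AaABAg","responsibility":"government",
  "reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
'''

def index_by_id(response_text: str) -> dict:
    """Parse the model output and key each coded record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgwCbQ32d9CErkaM57R4AaABAg"]["emotion"])  # -> outrage
```

In practice the parse can fail if the model emits malformed JSON, so a production version would wrap `json.loads` in error handling and log any record missing the `id` key.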