Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
This is scary folks no way in hell i would give a tommy gun to a ai robot…
ytc_Ugw0W69Oy…
AI learning to “self-replicate and self-exfiltrate”. Hope they come up with effe…
ytc_UgzNilJdN…
@DarrylAJones- Really? How much and where can I buy one? It should be a lot che…
ytr_Ugybqrg0B…
Bogus, many simply choose not to see because they can't conceive of someone so p…
ytr_UgzdbNCE9…
People creating AI love to say "No no no, this will actually HELP you in your jo…
ytc_UgxStXYzZ…
My take on this is simple. Dont make robots smart enough to be conscious. Make…
ytc_UgxymhbV-…
It's a complex issue indeed. While humans have made significant advancements, th…
ytr_Ugy4M21i-…
I hope it just creates a virus that targets humans. the rest of the life on the…
ytc_UgxIFQtC8…
Comment
That PR campaign will be the next serious war the US is involved in, particularly if it's a war against a near-peer adversary and not going great (the most likely case being a US-China conflict over Taiwan.) I think the US will develop and distribute fully autonomous weapons, but not turn them on until some crisis happens which massively swings public opinion (similar to the post-9/11 fervor) and legitimize their use both for that conflict and all future ones. In the meantime, the goal is to make them available for use at a moment's notice, similar to how we already have thousands of nukes waiting on standby.
The lack of ability to prove or disprove whether an adversary uses similar weapons (compared to, say, WMD use) will make it easier still to claim the enemy has used them first, whether it's true or not, so we are only responding "proportionally". Faced between a choice of a hypothetical future loss to AI, or a likely and imminent loss to an enemy in an ongoing war, the public will support their use.
reddit
AI Responsibility
Posted: 1700987835.0 (Unix timestamp)
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_katap8i", "responsibility": "government",  "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "rdc_kars0gr", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "rdc_kasm8un", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "rdc_kar76r1", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "rdc_kardxxk", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"}
]
```
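The raw response is a JSON array with one object per coded comment, keyed by comment ID and the four coding dimensions shown in the table above. A minimal sketch of how such a response could be validated before ingestion — the allowed values below are inferred only from the rows visible here, and the real codebook likely includes more categories:

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# sample responses above; extend them to match the actual codebook.
ALLOWED = {
    "responsibility": {"government", "ai_itself", "none", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"fear", "outrage", "mixed"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM response and reject rows with missing IDs or unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing id: {row}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

raw = ('[{"id":"rdc_katap8i","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codes = validate_codes(raw)
print(codes[0]["emotion"])  # fear
```

Validating up front means a malformed or hallucinated code fails loudly at ingestion time rather than silently skewing the coded dataset.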