Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
| Comment ID | Preview |
|---|---|
| ytc_UgwKMtUNT… | Why would you fight a robot it studies all forms of fight has no fear it's metal… |
| ytc_UgzzaYWHe… | Cool topic ! I can relate to, the thing is AI speed things up. I am running a sm… |
| ytc_Ugw3x94rS… | Oh, i love trauma dumping and advice seeking while waiting the 4-6 months for th… |
| ytr_Ugzd6qcPU… | @onepunchboi8526 8 dont think i mentioned ai in this comment buddy. This comme… |
| ytc_UgwVMDlxm… | Ai isn't some sort of malevolent hateful spirit, it's man made and man should be… |
| ytc_UgzVbRXiv… | "AI CEO says his product can change the market so that investors pump their mone… |
| ytc_UgxwTr9ED… | The chat bots don't recall conversations with other people when they're speaking… |
| rdc_d7khw1q | Learned this last week: 62 people control the equivalent of half of the world's … |
Comment
Massive armies disciplined by wage labor are a thing of the past. Today, soldiers accustomed to playing video games operate semi-autonomous war drones thousands of miles away from the theater of operations.
Will militarized Artificial Intelligence command autonomous war machines and human combat units in the future? That is possible, but the specter of unpredictability will never cease to haunt warfare.
The worst military defeats have been the result of stupid calculations. The Trojans accepted the gift of the Greeks; the Romans hired Alaric's barbarians as mercenary troops. Napoleon and Hitler underestimated the Russian winter. The Americans trusted the Afghans.
Can the use of AI (Artificial Imbecility) also backfire? The answer is yes. On a battlefield, everything is dynamic and unpredictable. Incorrect actions can result in strategic advantages; following a plan never guarantees that an accidental victory for the enemy is impossible.
A group of soldiers who do not understand the command they have received from their officers jeopardize the success of their country. Armies will never be fully automated. Therefore, no matter how good a militarized AI is, its success in war can be undermined by human error.
In 1983, Stanislav Petrov avoided a nuclear war by refusing to accept that American missiles had been launched against the USSR, despite the indications given to him by computers. In the future, the recurrence of the “Petrov effect” may prevent the scenario from the movie Terminator. But if that does not happen… it is better not to even think about it.
youtube
2024-07-25T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwX_wLvYWlgPQfI5kB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2veih0oEdOx_NYSx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzNX4ie-t9hZynecml4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw7GAjE128VgBOaNDt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwcJT7O9cgixzcVtMh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-QhvLGhkv9JaG9Ol4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwsj_CeyrMZqu45fRJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwNrYByxVy9v-KqVGN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgytaH3zApinIW2fWKZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_jtRA2yhnUvEXv2h4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
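A raw LLM response like the one above must be parsed and checked before its codes are stored. The minimal sketch below validates each record against the four coding dimensions shown in the result table. The allowed value sets here are only those values observed in this dump (the real codebook may define more), and `validate_coding` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Allowed values per dimension, inferred from the codes observed in this dump.
# Assumption: the real codebook may contain additional labels.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records:
    each must be a dict with an 'id' and a recognized value for
    every coding dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

For example, a response containing one complete record and one record missing its dimensions would yield a single validated entry; malformed records are silently dropped rather than stored with partial codes.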