Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I’m 100% going purely on gut feelings with this, and this is all rhetorical from me because I don’t have it in me to research it tonight - but I think the use of humans to perform the actions adds a sense of weight/gravitas/stakes to it. We SHOULDN’T just have neverending wars with robots. Not that human life has ever prevented us from neverending war now…but it *could*. It should.
We SHOULD have a human who can reconsider their action before pulling the trigger at their leader’s order. We SHOULD have a human who’s looking another human in the eyes before they kill them. We SHOULD consider the value of human life in direct 1:1 comparison to another human, and not just compared to the economic repercussions it may cause.
I mean, the whole debate of the Nuke was “one American dropping one bomb to kill 250k (and then another 1+1 for 150k) Japanese people was more valuable than sending millions of us to kill millions of them”.
Without that human element, what’s to stop us from just permanent war? What happens when we *don’t* have the “million Americans” comparison - do we just send a million robots to do it? And then another million because the enemy also has a million? Would we ever have a “decisive action” that ends a war ever again? Do we send a million robots that cost a trillion dollars and kill millions of humans anyway?
And if everyone’s leaders dehumanize war (more than they already have, or at least the current US admin has), there’s just no telling what would happen with that.
At least, not until the entire world reaches “let’s avoid Mutually Assured Destruction, yeah?” agreements with AI robots too. We’ve already done this whole “arms race that ultimately results in nothing happening anyway” thing once before with the nukes, and all it did was cost money, kill hundreds of thousands of people and devastate millions more, and cause regret around the world. Do we really have to do it again?
And if it DOES happen again…who pulls the trigger first? The
Source: reddit · Thread: Viral AI Reaction · Posted: 1776893766 (Unix timestamp) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_ohvwc3g", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_ohpbljd", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ohpi3ky", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "rdc_ohprhgl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_ohsk2ob", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
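As a minimal sketch of how a response like the one above could be parsed and sanity-checked before use: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, but the allowed value sets below are inferred only from the values visible on this page — the full codebook may define more categories, so treat `SCHEMA` as an assumption.

```python
import json

# Dimensions match the "Coding Result" table; value sets are inferred from
# the visible sample and are an assumption, not the full codebook.
SCHEMA = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "approval", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each coded comment against SCHEMA."""
    codings = json.loads(raw)
    for row in codings:
        missing = {"id", *SCHEMA} - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing fields {missing}")
        for dim, allowed in SCHEMA.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row[dim]!r}")
    return codings

# Example using one row from the response above.
raw = ('[{"id":"rdc_ohpi3ky","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"fear"}]')
rows = validate_codings(raw)
print(rows[0]["emotion"])  # fear
```

A check like this catches the common failure modes of LLM-as-coder pipelines (missing fields, invented category labels) before the codings reach analysis.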