Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.
Random samples
- "There is something that has been on my mind : How is AI going to impact human re…" (ytc_UgzWUL3g2…)
- "the situation isnt even moving it was stopped in its tracks the moment it happen…" (ytc_UgwL_58rZ…)
- "What is usually talked about is the use of AI by rogue/authoritarian states or b…" (ytc_Ugw-FXVgB…)
- "I mean, we already knew that we are fucked, so It does not even matter if we all…" (ytc_UgwpK64q2…)
- "Another agreement is that the NEED and morality. The need for Ai to satisfies so…" (ytr_UgyJnff8h…)
- "An AI shouldn't have the same rights of a human no matter how "fair" it is which…" (ytc_Ugx4GSCuW…)
- "The only issue one should be cognizant of is corruption! Covid proved that all o…" (ytc_Ugw6SqTSL…)
- "I don't know about artificial intelligence but the Americans definitely know all…" (ytc_Ugznj90sf…)
Comment
AI is an incredibly useful tool, but that's all it should ever be, a tool. having AI be the final decision maker is an awful idea.
for example, if you gave an AI all the data on a conflict you currently possess, and ask it how to ensure your side's victory, there are two immediately evident possibilities.
1. disengage from the conflict/surrender since you technically don't lose, you just don't win.
2. violate the Geneva conventions. kill countless civilians, use false-flags, and resort to bioweapons.
these two options, unless specifically ruled out, will almost certainly be the outcome, in option 1 you remove yourself from the conflict at the cost of your own authority, and in option 2, you ensure your enemy can't survive even if they win.
Source: reddit · Thread: AI Responsibility · Posted: 1648700230.0 (Unix timestamp, ≈ 2022-03-31 UTC) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_i2sd5ts","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"rdc_i2u4e7o","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"rdc_i2s40i9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_i2t3ztm","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"rdc_i2s307w","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
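A response like the one above can be parsed and sanity-checked before the codings are stored. Below is a minimal sketch in Python; the allowed category values are only those observed in this sample response (the real codebook may define more), and the function name is illustrative.

```python
import json

# Allowed values per dimension. These are ONLY the values observed in the
# sample response above; the actual codebook may allow more categories.
SCHEMA = {
    "responsibility": {"none", "user", "developer"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"approval", "indifference", "fear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment id.

    Raises ValueError if an entry is missing an id or a dimension,
    or uses a value outside the observed schema.
    """
    entries = json.loads(raw)
    coded = {}
    for entry in entries:
        cid = entry.get("id")
        if not cid:
            raise ValueError(f"entry without id: {entry}")
        for dim, allowed in SCHEMA.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = {dim: entry[dim] for dim in SCHEMA}
    return coded

# One entry from the sample response, fed through the parser.
raw = '''[
 {"id":"rdc_i2t3ztm","responsibility":"developer",
  "reasoning":"deontological","policy":"liability","emotion":"fear"}
]'''
coded = parse_codings(raw)
print(coded["rdc_i2t3ztm"]["emotion"])  # fear
```

Validating before storage means a malformed or off-schema response fails loudly at ingest time rather than silently polluting the coded dataset.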