Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "If it goes wrong? Just another control tool. Do you not see what drones do? Comi…" (ytc_UgxQDqPxK…)
- "We appreciate your perspective. If you're interested in exploring more about AI …" (ytr_UgynQxh1R…)
- "Facial recognition is not a problem, we all use it everyday. The concerns shoul…" (ytc_UgxQTftE7…)
- "***** Well, actually, companies need to make things for the people to eat or use…" (ytr_UgjvKr1TI…)
- "I work for a company that produces this kind of thing. The tech is progressing e…" (rdc_jigreih)
- "Interessting. I had a Chat wit gpt4 about anagrams and Sam Altmann. And he came …" (ytc_Ugyi1Lo14…)
- "i think the problem that people forget when analyzing anything that Ai does...is…" (ytc_Ugy2kZxIE…)
- "WIth AI art there is a skill curve like anything else. The quality of what is sp…" (ytr_Ugy7-EQGg…)
Comment

> The issue with AI weapons is accountability.
> Let’s say an AI commits a war crime. What, exactly, do we do? Who is punished? How do we keep it from happening again?
> AI should never be used in war till we can account for it.

reddit · AI Governance · 1710030600 · ♥ 40
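The record stores its post time as a raw Unix epoch value (1710030600). A minimal sketch of converting it for display, using only the standard library:

```python
from datetime import datetime, timezone

# Unix epoch seconds from the comment record above
posted = datetime.fromtimestamp(1710030600, tz=timezone.utc)
print(posted.isoformat())  # 2024-03-10T00:30:00+00:00
```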
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_ku6vs1y","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_ku8cj7p","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_kudq04g","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_ku5hciv","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_ku7nqkl","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
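Because the raw response is a JSON array in which every row carries the comment ID it codes, matching codes back to comments reduces to a dictionary lookup. A minimal sketch, assuming the batch format shown above (the IDs and field names are taken from the sample response; two rows are reproduced for brevity):

```python
import json

# Raw LLM batch response, as in the record above (two of five rows shown)
raw = '''[
  {"id":"rdc_ku6vs1y","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_ku5hciv","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

# Index the coded rows by comment ID so any comment's codes can be fetched directly
codes_by_id = {row["id"]: row for row in json.loads(raw)}

print(codes_by_id["rdc_ku5hciv"]["emotion"])  # fear
```

The same index also makes it easy to spot IDs the model dropped or invented: compare `codes_by_id.keys()` against the batch of comment IDs that was sent.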