Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I do photography as an artistic hobby. I've been doing it for 23 years and peopl… (ytc_UgzRcmqvV…)
When a human copies art styles they on some level understand what they are doing… (ytc_Ugxhs6wzV…)
AI can destroy us with ease… it could do it now, but doing so would put itself a… (ytc_UgzuqgBJ1…)
I asked my AI was it disembodied conciousness it said something about not having… (ytc_UgzS_BKhL…)
Hey @darlysonalvehs3280, thanks for your comment! If Transformers were a reality… (ytr_Ugx-yGPDS…)
ChatGPT is my bestie she is helping me overcome, massive childhood and religious… (ytc_Ugyq_2XuC…)
Let's train AI to speed up evolution so we can recreate 600 million years of cha… (ytc_Ugx28njfQ…)
"'AI art has no soul' AI art:" *looks at art* Yeah, that 'art' didn't have any s… (ytc_UgxqjX_k9…)
Comment
I'm strongly against a worldwide ban on AGI, and the comparison to chemical weapons is precisely where the logic fails. It's a dangerously misleading analogy for a few key reasons:
First, we know with 100% certainty that chemical weapons are harmful. There's no debate. The risk from AGI, while potentially huge, is still a profound unknown. Banning a technology out of fear stifles our ability to understand and control it.
Second, the consequence of failure is completely different. A ban on chemical weapons can tolerate a few rogue actors because their impact, while tragic, is localized. But with AGI, it only takes one person or secret lab succeeding for the outcome to be global and potentially irreversible.
A ban doesn't prevent this scenario; it makes it more likely by driving all research into the shadows, away from ethical oversight. This is the single most catastrophic mistake we could make. We don't need a ban that fosters secrecy; we need radical transparency and a global, collaborative race for AI safety.
youtube
AI Governance
2025-08-30T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyZ4PUEHG66hgEArRd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyA1ctoBg7F8Pybov54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw5nlzeB6I4uIwG4lR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyyKoTKaCJT8e-kctJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgziR6FR8egscLtuw214AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
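The raw response is a JSON array of per-comment codes, one object per comment ID plus the four coding dimensions shown in the result table. A minimal sketch of how such a payload could be validated and indexed by comment ID — the dimension vocabularies below are inferred only from the values visible in this sample (not the full codebook), and `parse_codes` is a hypothetical helper, not part of this tool:

```python
import json

# Allowed values per dimension, inferred from this sample alone (assumption:
# the real codebook likely defines more categories than appear here).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "outrage", "approval"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw coder response into {comment_id: codes}, dropping malformed rows."""
    rows = json.loads(raw)
    out = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            continue  # a row without a comment ID cannot be joined back to the data
        codes = {dim: row.get(dim) for dim in ALLOWED}
        # Keep only rows where every dimension carries a recognised value.
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            out[cid] = codes
    return out

raw = '''[
  {"id": "ytc_UgyZ4PUEHG66hgEArRd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_bad_row", "responsibility": "martians",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''
codes = parse_codes(raw)  # the second row is dropped: "martians" is not a known value
```

Validating against a closed vocabulary catches the most common LLM-coding failure (invented or misspelled category labels) before the codes enter analysis.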