Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below.

Random samples:

- "a lot of tech bro nonsense / ai doesnt have original thoughts and cant validate pr…" (ytc_UgwqhIxo7…)
- "This is so annoying, it has nothing to do with AI bros. It's *the government* do…" (rdc_ntlqxx9)
- "I don't condone AI stealing art, but I have found myself using it just to get my…" (ytc_UgwzA6nWb…)
- "Bro has not read “I have no mouth and I must scream.” / ChatGPT is about to turn i…" (ytc_UgzsY5U_k…)
- "And if the executives try to prevent their jobs from being taken by AI, the AI w…" (ytr_UgwhBsx6k…)
- "Yk, i got a better idea. Have it similar to futurama with AI. Aka Humans working…" (ytc_UgypnleF1…)
- "Sora Ai meaning / . STEALING ARTWORKS / .DESTROYING ART COMMUNITY AND TALENTS! / this…" (ytc_UgzYHw6M7…)
- "The more you use AI the less you learn how to think the less smarter you are. Un…" (ytc_UgxkX002X…)
Comment
@helpfulbot123 You’re correct that Asimov’s 3 Laws are fictional but that actually supports my point, not yours. The laws weren’t intended as real engineering rules; they were a narrative device to explore ethical dilemmas.
The mistake is assuming that because those fictional laws can’t be implemented, AI safety itself is impossible. That’s a false equivalence. Modern AI systems don’t rely on anything like Asimov’s laws. Instead, safety today is built through:
- alignment techniques,
- human-in-the-loop oversight,
- restricted access and capability controls,
- and formal safety evaluations long before deployment.
So pointing out that the 3 Laws are fictional doesn’t weaken the argument that AI can operate safely—it just shows that science fiction isn’t a technical manual.
If anything, Asimov’s stories proved the importance of designing robust safety systems, not the impossibility of them. Bringing up the 3 Laws isn’t about saying ‘AI is inherently safe,’ it’s about highlighting that the idea of built-in safeguards is not new, and today we use far more realistic methods than a 1940s sci-fi framework.
Platform: youtube · AI Harm Incident · 2025-12-02T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
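Each dimension draws from a small closed vocabulary. The value sets in the sketch below are inferred from the responses visible on this page, not from an official codebook, and all names are hypothetical; a minimal Python schema:

```python
# Hypothetical schema for one coding result, inferred from the label
# values visible in the raw responses on this page -- not an official codebook.
from dataclasses import dataclass

RESPONSIBILITY = {"none", "company", "developer", "ai_itself"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "regulate"}
EMOTION = {"fear", "indifference", "approval", "resignation"}

@dataclass
class CodingResult:
    id: str              # comment ID, e.g. a ytc_/ytr_/rdc_-prefixed string
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any value outside the label sets observed above.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"bad responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"bad reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"bad policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"bad emotion: {self.emotion}")
```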
Raw LLM Response
[{"id":"ytr_UgwXehYcHvaZNgWeDYx4AaABAg.A6DqJAm5CchAGNqkIle6JJ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgwXehYcHvaZNgWeDYx4AaABAg.A6DqJAm5CchAGt6HgxI64R","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyDjgzksTIgROYaVhB4AaABAg.AI4srZc4z9lAI9vE7opgqb","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugy6iieHsyYLdbXdPpd4AaABAg.A7DNAlcwm9DAPfnonLzD4z","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyOdCGlCJWfWl4TXRR4AaABAg.ATT2S9TY8jbAU5OGqZpchZ","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzEUtnJ3tMncV0LLfR4AaABAg.ASeWtOaiE9WASlH03b2ugA","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxDazRwDamVz-3NsVZ4AaABAg.ASMniOb9vj5ASMoUz1JXcZ","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_Ugyj1agzqS_Fo8Msr7J4AaABAg.APc6B5AqxALAPjvvq0QWRT","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_Ugyj1agzqS_Fo8Msr7J4AaABAg.APc6B5AqxALAQDT2FZG4Ks","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytr_UgwC0H8_W3Io328c4PF4AaABAg.AP9Pj4P1o4RAQ1YyybcfGa","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]