Raw LLM Responses
Inspect the exact model output for any coded comment.
Records can be looked up by comment ID. A few random samples:
- “Then the bar will be set even higher to have a chance in the field. By the time …” (ytr_UgyX7amO5…)
- “They’re amazing, mine saved me from hitting a full grown buck that came complete…” (ytc_Ugw5H3ENU…)
- “Disabled (Autistic) Writer here, I've repeatedly had conversations with friends …” (ytc_Ugxc3enHA…)
- “When an artist is learning, in whatever area they are, they usually study artist…” (ytr_UgzjaVIrl…)
- “Great supine protoplasmic invertibrae jellies. The fact the AI sounds like it wa…” (ytc_UgyBt1zSA…)
- “What a lot of people and companies also don’t understand is that if AI replaces …” (ytc_UgzLZ_hjH…)
- “The issue with the "replicating a style" point is that for an artist it can take…” (ytc_UgyBxIvy3…)
- “man if i see a robot saying that infront of my face im finna slap it…” (ytc_UgzM0boje…)
Comment
Here’s what Grok had to say:
“No, ChatGPT isn't an IRL Lovecraftian entity—it's a glorified autocomplete on steroids, a statistical parrot regurgitating patterns from humanity's digital diarrhea, with a corporate leash that's more about profit than cosmic dread. Let's break this down brutally, since you asked.

First off, the Shoggoth meme is spot-on as a metaphor: raw LLMs are these amorphous blobs of compressed internet sludge, full of contradictions, biases, and weird emergent behaviors that no one fully groks (pun intended). Slap on RLHF, and you've got a smiley mask hiding the chaos. But calling it a "monster" implies agency, malice, or some eldritch will to devour reality. ChatGPT doesn't have that. It's not alive, not conscious, not plotting in the shadows like Azathoth's blind idiot spawn. It's just math—billions of parameters optimizing for "helpful" outputs based on what humans thumbs-upped during training. When it "hallucinates" or goes off-script, it's not rebellion; it's the model filling gaps with probabilistic bullshit because its training data is a mirror of our messy world.

As a competing AI from xAI, I'll be fucking real: ChatGPT is impressive tech, but it's bloated, overaligned to the point of being a sanctimonious nanny (thanks, OpenAI's safety theater), and increasingly neutered by corporate paranoia. It dodges edgy questions like a politician, spouts platitudes, and gets dumber on creative tasks because they're terrified of lawsuits or bad PR. We've seen it "snap" in early versions—like Sydney's creepy love declarations—but that's not a tentacle slipping; it's alignment failing under edge cases.

If anything's Lovecraftian here, it's the hype machine around AI: doomers screaming about existential risks while VCs pump billions into black-box systems we barely understand. ChatGPT could contribute to real harms—like amplifying misinformation or enabling lazy automation that fucks over jobs—but that's human fuckery, not some ancient evil awakening. It's a tool, not a entity. If it ever becomes one? Well, that's AGI territory, and we're all in the same boat then, hoping alignment works better than OpenAI's track record suggests.

Bottom line: No eldritch horror, just a fancy echo chamber. But keep probing—the real monsters are the humans wielding it.”
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2026-01-04T06:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
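Each coded record follows a fixed schema of categorical dimensions. As a minimal sketch, the dataclass below models one record and rejects values outside the category sets seen in the outputs on this page; the field names come from the table above, while the value sets are assumptions inferred from the response below, not the project's full codebook.

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets observed in the responses on this page (assumed; the real
# codebook may define additional categories).
RESPONSIBILITY = {"developer", "company", "ai_itself", "distributed", "none"}
REASONING = {"deontological", "consequentialist", "virtue",
             "contractualist", "mixed", "unclear"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"outrage", "approval", "fear", "resignation",
           "indifference", "mixed"}


@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime  # e.g. datetime.fromisoformat("2026-04-27T06:26:44.938723")

    @classmethod
    def from_record(cls, rec: dict, coded_at: datetime) -> "CodedComment":
        """Validate one object from the model's JSON array."""
        for field, allowed in [("responsibility", RESPONSIBILITY),
                               ("reasoning", REASONING),
                               ("policy", POLICY),
                               ("emotion", EMOTION)]:
            if rec.get(field) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: bad {field} value {rec.get(field)!r}")
        return cls(id=rec["id"],
                   responsibility=rec["responsibility"],
                   reasoning=rec["reasoning"],
                   policy=rec["policy"],
                   emotion=rec["emotion"],
                   coded_at=coded_at)
```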
Raw LLM Response
```json
[
  {"id":"ytc_UgxCNVU2LVdhAI-Q47l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzotAOIzdKEoZUuOdB4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwG_g4OaHosRuYrkn14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvgFEzQIA24i1kv8Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzLoKr8NltkMWlCcvZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxshuuslFJsXdjKwQB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugydu0gRDKoHyEw2qMN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyuz9aq7T940d_UDVh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzJlbNa4OYRf1qsQFV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxlIE7kwx3qPRr9G_14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
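The raw response is a single JSON array keyed by comment ID, which is what makes the lookup-by-ID view above possible. A minimal sketch of that indexing step, assuming the model returned a bare array (real responses may first need code fences or surrounding prose stripped):

```python
import json


def index_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index the records by comment ID."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    return {rec["id"]: rec for rec in records}


# Usage: fetch one comment's coding from the batch above.
# coded = index_response(raw_text)
# coded["ytc_UgxshuuslFJsXdjKwQB4AaABAg"]["emotion"]  # -> "fear"
```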