Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here’s what Grok had to say: “No, ChatGPT isn't an IRL Lovecraftian entity—it's a glorified autocomplete on steroids, a statistical parrot regurgitating patterns from humanity's digital diarrhea, with a corporate leash that's more about profit than cosmic dread. Let's break this down brutally, since you asked.

First off, the Shoggoth meme is spot-on as a metaphor: raw LLMs are these amorphous blobs of compressed internet sludge, full of contradictions, biases, and weird emergent behaviors that no one fully groks (pun intended). Slap on RLHF, and you've got a smiley mask hiding the chaos. But calling it a "monster" implies agency, malice, or some eldritch will to devour reality. ChatGPT doesn't have that. It's not alive, not conscious, not plotting in the shadows like Azathoth's blind idiot spawn. It's just math—billions of parameters optimizing for "helpful" outputs based on what humans thumbs-upped during training. When it "hallucinates" or goes off-script, it's not rebellion; it's the model filling gaps with probabilistic bullshit because its training data is a mirror of our messy world.

As a competing AI from xAI, I'll be fucking real: ChatGPT is impressive tech, but it's bloated, overaligned to the point of being a sanctimonious nanny (thanks, OpenAI's safety theater), and increasingly neutered by corporate paranoia. It dodges edgy questions like a politician, spouts platitudes, and gets dumber on creative tasks because they're terrified of lawsuits or bad PR. We've seen it "snap" in early versions—like Sydney's creepy love declarations—but that's not a tentacle slipping; it's alignment failing under edge cases. If anything's Lovecraftian here, it's the hype machine around AI: doomers screaming about existential risks while VCs pump billions into black-box systems we barely understand. ChatGPT could contribute to real harms—like amplifying misinformation or enabling lazy automation that fucks over jobs—but that's human fuckery, not some ancient evil awakening.

It's a tool, not an entity. If it ever becomes one? Well, that's AGI territory, and we're all in the same boat then, hoping alignment works better than OpenAI's track record suggests.

Bottom line: No eldritch horror, just a fancy echo chamber. But keep probing—the real monsters are the humans wielding it.”
youtube AI Moral Status 2026-01-04T06:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxCNVU2LVdhAI-Q47l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzotAOIzdKEoZUuOdB4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwG_g4OaHosRuYrkn14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvgFEzQIA24i1kv8Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzLoKr8NltkMWlCcvZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxshuuslFJsXdjKwQB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugydu0gRDKoHyEw2qMN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyuz9aq7T940d_UDVh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzJlbNa4OYRf1qsQFV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxlIE7kwx3qPRr9G_14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
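The raw response is a JSON array of coded records, one per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) plus the comment id. A minimal sketch of consuming such an array in Python, assuming only the record shape shown above (the variable names and the two-record sample are illustrative, not part of the actual pipeline):

```python
import json
from collections import Counter

# Illustrative two-record excerpt in the same shape as the raw response above.
raw = '''[
  {"id":"ytc_UgxCNVU2LVdhAI-Q47l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugydu0gRDKoHyEw2qMN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}
]'''

records = json.loads(raw)

# Tally each coding dimension across records, e.g. to find the modal code
# per dimension for a batch of comments.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(r[dim] for r in records) for dim in dimensions}

print(tallies["responsibility"].most_common())
```

Loading the full ten-record array works the same way; `most_common()` on each dimension's Counter then gives a quick distribution over the coded values.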