Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
From ChatGPT: Great question — and a perfect opportunity to demystify how LLMs like ChatGPT generate responses to posts like that Reddit one you shared. Let's take that post as an example — filled with speculative ideas wrapped in engaging language. Here's how a large language model (LLM) would likely generate something similar:

⸻

🔧 Under the Hood: How an LLM Generates a Post Like This

1. Prompt Context Ingested
   • The LLM sees a prompt or prior conversation that sets the tone (e.g., "What are some secrets humanity might not know yet?").
   • It encodes that prompt into a numerical representation (embedding) that captures both its structure and semantics.

2. Probability-Based Generation
   • At each step, the model predicts the most likely next word (token) based on:
     • What's already been written
     • Patterns from billions of examples in its training data (books, forums, science articles, conspiracy blogs, etc.)
   • It doesn't "believe" or "know" the ideas — it just outputs coherent patterns it has seen associated with similar prompts.

3. Stylistic Influence
   • The style (emoji bullets, bold headers, dramatic implications) likely mimics Reddit culture, listicles, and persuasive writing patterns.
   • LLMs imitate these styles fluently because they were statistically frequent in training data for "spiritual-science-mystery" content.

4. Source Agnosticism
   • The LLM doesn't know if something is from a scientific journal or a fringe blog. It weighs token likelihood, not source credibility.
   • So, phrases like "rogue geneticists believe…" or "the CIA speculated in declassified docs…" appear because they fit the style and narrative logic of similar internet texts.

5. Coherence through Recurrence
   • The "implication" formula used in every point (e.g., "Implication: Death is logging off a terminal") is a learned pattern. The model noticed that summarising with an "implication" is persuasive and neat — so it repeats it.

⸻

🤖 So What Is It Really Doing?

It's not "thinki
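The step-by-step, probability-based generation the comment describes can be sketched with a toy next-token sampler. Everything here is illustrative: the vocabulary, the probabilities, and the one-token context are made up, standing in for a real trained model's much larger context and learned distribution.

```python
import random

# Toy "language model": for each context token, a hand-written distribution
# over possible next tokens. A real LLM learns these probabilities from
# training data and conditions on the whole preceding text, not one token.
NEXT_TOKEN_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"secret": 0.5, "model": 0.5},
    "a": {"pattern": 1.0},
    "secret": {"<end>": 1.0},
    "model": {"<end>": 1.0},
    "pattern": {"<end>": 1.0},
}

def generate(seed=None, greedy=True):
    """Emit tokens one at a time, each chosen from the distribution
    conditioned on the previous token, until "<end>" is produced."""
    rng = random.Random(seed)
    token, out = "<start>", []
    while True:
        dist = NEXT_TOKEN_PROBS[token]
        if greedy:
            # always pick the single most likely next token
            token = max(dist, key=dist.get)
        else:
            # sample next tokens in proportion to their probability
            tokens, weights = zip(*dist.items())
            token = rng.choices(tokens, weights=weights)[0]
        if token == "<end>":
            return " ".join(out)
        out.append(token)

print(generate())  # greedy decoding always follows the most likely path
```

The model never checks whether "the secret" is true; it only follows the likeliest continuation, which is the comment's point about source agnosticism.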
reddit AI Moral Status 1750294472.0 ♥ 4
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_myl4d1f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_myivx6f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_myk5sb2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_myjvczc","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_myk7eow","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
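The log does not show how the single-row coding result is aggregated from these five coded items. A minimal sketch that just parses the raw response and tallies each dimension, so the per-value counts behind any aggregation rule are visible (the variable names are illustrative, not from the tool):

```python
import json
from collections import Counter

# Raw LLM response copied verbatim from the log above.
raw = '''[
  {"id":"rdc_myl4d1f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_myivx6f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_myk5sb2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_myjvczc","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_myk7eow","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

records = json.loads(raw)

# Count how often each value appears, per coding dimension.
tallies = {
    dim: Counter(record[dim] for record in records)
    for dim in ("responsibility", "reasoning", "policy", "emotion")
}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

Note that a simple majority vote would not reproduce the summary table here: "consequentialist" outnumbers "unclear" 3 to 2 for reasoning, yet the table shows "unclear", so the tool evidently applies some other rule.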