Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Best podcast recently.. Sal Altman ai says he's communicating with fallen angel …
ytc_Ugx4sfxb1…
You are on Social Media, the epitome of brainrot. You have probably flown with a…
ytr_UgzL5Vb9y…
These people are just lazy. Its like this comparison:
Me and my friend like draw…
ytc_UgxY8NvJm…
AJ, this is really spooky. I’m watching this episode while I’m playing scrabble…
ytc_UgzKkbWeu…
Artificial intelligence would destroyed the planet and everything on it developi…
ytc_Ugy2PxGoX…
In a job interview..I got asked about Ai....i told them I was really concerned a…
ytc_UgwnkgtqP…
AI also lacks any sort of real subtlety, whatever you ask it to do will recreate…
ytc_UgyXpkXRg…
AI is being pushed to race far faster than is safe.
If we had a little patienc…
ytc_UgwRn0Ldj…
Comment
From ChatGPT:
Great question — and a perfect opportunity to demystify how LLMs like ChatGPT generate responses to posts like that Reddit one you shared.
Let’s take that post as an example — filled with speculative ideas wrapped in engaging language. Here’s how a large language model (LLM) would likely generate something similar:
⸻
🔧 Under the Hood: How an LLM Generates a Post Like This
1. Prompt Context Ingested
• The LLM sees a prompt or prior conversation that sets the tone (e.g., “What are some secrets humanity might not know yet?”).
• It encodes that prompt into a numerical representation (embedding) that captures both its structure and semantics.
2. Probability-Based Generation
• At each step, the model predicts the most likely next word (token) based on:
• What’s already been written
• Patterns from billions of examples in its training data (books, forums, science articles, conspiracy blogs, etc.)
• It doesn’t “believe” or “know” the ideas — it just outputs coherent patterns it has seen associated with similar prompts.
3. Stylistic Influence
• The style (emoji bullets, bold headers, dramatic implications) likely mimics Reddit culture, listicles, and persuasive writing patterns.
• LLMs imitate these styles fluently because they were statistically frequent in training data for “spiritual-science-mystery” content.
4. Source Agnosticism
• The LLM doesn’t know if something is from a scientific journal or a fringe blog. It weighs token likelihood, not source credibility.
• So, phrases like “rogue geneticists believe…” or “the CIA speculated in declassified docs…” appear because they fit the style and narrative logic of similar internet texts.
5. Coherence through Recurrence
• The “implication” formula used in every point (e.g., “Implication: Death is logging off a terminal”) is a learned pattern. The model noticed that summarising with an “implication” is persuasive and neat — so it repeats it.
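Steps 1 and 2 above can be sketched as a toy pipeline: prompt → token IDs → embedding vectors → scored next-token candidates. Everything in this sketch is invented for illustration (the vocabulary, the random "embeddings", the hand-written scores); a real LLM uses a learned subword tokenizer and a trained neural network, not lookup tables.

```python
# Toy sketch of steps 1-2: prompt -> token IDs -> vectors -> next token.
# All vocabulary entries, vectors, and scores are made up for illustration.
import math
import random

VOCAB = ["what", "are", "some", "secrets", "humanity", "might",
         "not", "know", "yet", "?"]
TOKEN_ID = {w: i for i, w in enumerate(VOCAB)}

def tokenize(text):
    """Step 1a: split the prompt and map each word to a vocabulary ID."""
    return [TOKEN_ID[w] for w in text.lower().replace("?", " ?").split()]

def embed(token_ids, dim=4, seed=0):
    """Step 1b: look up a numeric vector per token. Here the vectors are
    random; in a real model they are learned embeddings."""
    rng = random.Random(seed)
    table = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in VOCAB]
    return [table[t] for t in token_ids]

def softmax(scores):
    """Step 2: turn raw next-token scores into a probability distribution."""
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Hypothetical scores for the token after "Death is logging off a ..."
scores = {"terminal": 3.1, "computer": 2.4, "dream": 0.7, "banana": -2.0}
probs = softmax(scores)

ids = tokenize("What are some secrets humanity might not know yet?")
print(ids)                        # one integer ID per token
print(len(embed(ids)[0]))         # each token becomes a 4-number vector
print(max(probs, key=probs.get))  # greedy decoding picks "terminal"
```

Greedy decoding (always taking the argmax) is only one option: sampling from `probs` instead is what makes the model sometimes emit a less likely but stylistically plausible continuation, which is exactly the "coherent patterns, not beliefs" behaviour described above.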
⸻
🤖 So What Is It Really Doing?
It’s not “thinki
reddit
AI Moral Status
1750294472.0
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_myl4d1f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_myivx6f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_myk5sb2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_myjvczc","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_myk7eow","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
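The raw response above is a JSON array with one coding object per comment. A minimal sketch of how such output could be parsed and indexed by comment ID follows; the field names come from the JSON shown, but the set of allowed emotion values is an assumption inferred from the values that appear in this dump, not a documented schema.

```python
# Sketch: parse the raw LLM coding response into a per-comment lookup.
# Field names match the JSON above; the EMOTIONS set is an assumption
# based only on the values visible in this dump.
import json

RAW = '''[
 {"id":"rdc_myl4d1f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"rdc_myk5sb2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

EMOTIONS = {"outrage", "indifference", "approval", "resignation"}  # assumed

def parse_codings(raw):
    """Parse the model output and index each coding object by comment ID,
    rejecting rows whose emotion value falls outside the expected set."""
    out = {}
    for row in json.loads(raw):
        if row.get("emotion") not in EMOTIONS:
            raise ValueError(f"unexpected emotion in row: {row!r}")
        out[row["id"]] = row
    return out

codings = parse_codings(RAW)
print(codings["rdc_myk5sb2"]["emotion"])  # -> approval
```

Validating against an explicit value set like this is a common guard when coding with LLMs, since the model can occasionally emit a label outside the requested codebook.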