Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Are you against AI now?
Bernie was also against automobiles, he feared horse car…
ytc_UgwIylNg8…
i'm not that good at drawing, but i would never use AI art, when i am doing bett…
ytc_UgzKCzmdj…
@Missy-in5nf if you had any brain you would understand what I'm saying is I don…
ytr_UgwnQgYFW…
Have you seen any of the footage of the war in the Ukraine breadman, they're abs…
ytr_UgxZySuRo…
so WHY it's OK for America to reasearch on AI, and it's NOT OK for other country…
ytc_UgyMrik_T…
I think the parents have a good chance of winning if they can find the manufactu…
ytr_Ugw6Z5ghJ…
I feel sometimes i have fallen into this trap skipping the process of brain proc…
ytc_Ugyh2vo0j…
That’s not AI controlling us. That’s us controlling us. All those stuff u listed…
ytr_Ugx7vcfCb…
Comment
That is an incredibly long output, wow, but after stopping to read a couple of random sections, it doesn't seem super impressive.
Literally the first thing I saw is outdated info:
>Emerging Capabilities in the Next 6–12 Months
>
> ...
>Google’s Gemini is expected to be multimodal from the outset
>
> ...
>Context window lengths will likely continue to expand. After models like Claude demonstrated handling 100K-token inputs, we might see mainstream LLMs capable of reading and analyzing entire books or large codebases at once.
Gemini 1.5 Pro was released a year ago and was both multimodal and able to handle up to 10M tokens. Searching the text for "1.5" and "Gemini" shows the report never mentions Gemini 1.5 Pro, and says essentially nothing about Google's Gemini models.
Here's another random snippet I found that is outdated or poorly phrased at best:
>For formal reasoning or coding, some systems have “analysis” modes (e.g. GPT-4 has a mode where it can output thought in brackets that the user might not see). If you have access to such features (often via an API or specific interface), leveraging them can be powerful. But in a typical chat, an effective approach is to explicitly ask for structured output.
Analysis modes? I don't think anyone has ever used that phrase, and this doesn't seem to be referencing the 'reasoning process' of o1, o3, R1, etc., or even the analysis that Deep Search itself is supposed to do.
It mentions "Reasoning-Focused Models or Modes" without naming any of the actual known reasoning-focused models like o1 and R1. This specific section of the report seems to be mainly hallucinations vaguely based on a few outdated ideas.
For someone who hasn't been paying any significant attention to the AI space in years (and who isn't looking for the most specific and up-to-date info), this report is probably pretty good. But for me, it seems lacking.
I'm guessing a different prompt, one more focused on
reddit
AI Moral Status
1739023836.0 (2025-02-08 UTC)
♥ 14
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mbnry5r","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_mc7lhim","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"rdc_mbq9i6x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_mbqkv5a","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_mcf1us8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
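The "look up by comment ID" step above amounts to parsing this raw JSON array and indexing it by the `id` field. A minimal sketch, using only the sample rows shown above (the `coding_for` helper name is our own, not part of the tool):

```python
import json

# Raw LLM response as displayed above: a JSON array of per-comment codings.
raw_response = """
[
 {"id":"rdc_mbnry5r","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_mc7lhim","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
"""

def coding_for(comment_id: str, raw: str) -> dict:
    """Parse the raw model output and return the coding row for one comment ID."""
    rows = json.loads(raw)
    by_id = {row["id"]: row for row in rows}  # index the array by comment ID
    return by_id[comment_id]

print(coding_for("rdc_mbnry5r", raw_response)["emotion"])  # prints "indifference"
```

The dimension/value table shown for a comment is just one of these rows rendered row-by-row.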