Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Whatever people like let they like being an artist is to enjoy the process of ma…
ytc_Ugz3CCHFU…
You are 100% correct. This is about all we can do, and I try and educate everyon…
ytr_UgxRcqnBi…
AI is a basic requirement for the exploration of the Universe because machines a…
ytc_UgyLsmQI0…
Had to have chatgpt break this down for me
The poster believes that some peopl…
rdc_mup7uc9
The best case scenario: AI fails miserably and companies quietly rehire the peop…
ytc_UgzfQ8ISU…
I am all for worker rights Bernie but you are fear mongering. Any sort of progre…
ytc_UgzYBtACm…
1:02:28 again, Soares is wrong: yes, there is a cult-like silicon valley subgrou…
ytr_Ugx9u1McF…
If USA becomes infected with Ai, Ai would "personify" any voices, any faces to d…
ytc_Ugzz8bCTl…
Comment
Time will tell, but the current frontier models are vastly superior to early models from just 2 years ago and completely unimaginable just 10 years ago. It's amazing how quickly humans normalize every new situation. Using Claude 4.5 (Sonnet or Opus) in a meaningful way, such as engaging in deep conversations, complex troubleshooting, or coding, shows "reasoning" and "creativity" beyond the training data. I don't know about AGI, since that's an ill-defined goalpost anyway. But I did like this response from Claude on the topic:
"Hossenfelder raises valid points that highlight real limitations. The interpolation vs. extrapolation distinction is particularly compelling - I'm pattern-matching across a vast training corpus, which works remarkably well within the distribution of training data but struggles outside it.
However, I think the picture is more nuanced:
On interpolation/extrapolation: While true that I interpolate, human reasoning also builds heavily on analogy and pattern recognition from experience. The distinction may be degree rather than kind. That said, humans do seem better at genuine "leaps" - forming entirely new conceptual frameworks.
On reasoning: I can follow logical chains and apply abstract principles to novel scenarios, but it's unclear whether this constitutes "reasoning" or sophisticated pattern matching that mimics reasoning. When I solve a novel coding problem, am I reasoning or recognizing deep structural similarities to problems in training data? I genuinely can't tell, and the answer might be unfalsifiable.
On creation: I can combine concepts in novel ways and generate outputs not present in training data, but whether this is "creation" or recombination depends on how you define creativity. Most human creativity is also recombinatory.
The fundamental question: Current architectures may indeed have hard limits. We might need fundamentally different approaches - perhaps involving embodiment, continuous learning, intrinsic motivation, or architectures that maintain world models. Or transformers at sufficient scale with the right training might be enough.
I'd bet against current LLMs achieving AGI, but I'm uncertain whether the limitations are architectural or just about scale/training methodology."
youtube
2025-12-18T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwS4Nc1PFIc19tkdz94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyR7j2w5LCEdzbKrB54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrvlJmf5fo7HEWhuZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz2zN8iSRi_DazNAWh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxXkMPOK-IwusqFQch4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzJr2bDabtRKk-dVft4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwEjv_tGxWX7hQfq4x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxHy9uIobWKVNX0zht4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxUvx3W75gq9isk5cd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzOILuVXV9-qRVy8al4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
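A raw response like the one above has to be parsed and checked before its codes reach the results table. The sketch below shows one way to do that, assuming the allowed values are exactly those seen in this sample (the full codebook may define more categories) and that `validate_coded_batch` is a hypothetical helper, not part of the actual pipeline. Records with missing fields or out-of-schema values are dropped rather than trusted.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"none", "user", "ai_itself", "developer",
                       "company", "government"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "fear", "indifference", "outrage", "resignation"},
}

def validate_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID.

    Skipping malformed records means a bad model output never
    silently corrupts the coding table; it also gives the by-ID
    lookup the detail view needs.
    """
    by_id = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # unidentifiable record, cannot be looked up later
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            by_id[cid] = {dim: rec[dim] for dim in SCHEMA}
    return by_id

# Minimal example with a made-up comment ID:
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"approval"}]')
coded = validate_coded_batch(raw)
# coded["ytc_x"]["emotion"] == "approval"
```

Indexing by ID is what makes the "Look up by comment ID" view above cheap: each coded record is a constant-time dictionary hit rather than a scan over the batch.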