Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below:
- The headlight beams are out of alignment and the video is doctored (darker) As … (ytc_UgxQuN1J4…)
- Ai could wipe out the working class (in America). Like there is working class in… (ytc_Ugz-Usp_w…)
- "hard problem of consciousness" look it up, that is why you cannot say Ai is sen… (ytc_Ugwg6KRgc…)
- Ai isn’t dangerous to everybody, at least from what I’ve seen, but it can absolu… (ytc_UgzXQV8_N…)
- What is the most bizarre explanation of AI I mean ai is just a bunch of 0s and… (ytc_UgyUulEc1…)
- Tech support won’t go fully AI. It can’t. Due to the fact it doesn’t have the hu… (ytc_Ugw0hLTtV…)
- Yawn.... another AI doomster shilling for a few dollars on any platform that wil… (ytc_UgzZbdEO-…)
- "Im an artist too!" *Just writes a prompt, while the actual artist, which is t… (ytc_UgzdCWVUL…)
Comment
This is my view as well. LLMs are incredible in their own right at certain tasks, but the next generation of architectures that implement online learning in ML models are going to be more human-like in capabilities and consistency:
See:
- [Hierarchical Reasoning Model](https://arxiv.org/pdf/2506.21734)
- [Energy Based Model](https://alexiglad.github.io/blog/2025/ebt/)
- [Test-time Training](https://test-time-training.github.io/)
Yann LeCunn’s JEPA and Francois Chollet’s program synthesis could also be potentially promising paths towards AGI.
I personally put a 50% chance of AGI in the next 10 years, 70% in the next 20 years and 90% in the next 30.
Source: reddit · AI Responsibility · 1754871447.0 (Unix timestamp) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_n7zg0ur", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n812lqr", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7z3b72", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7yu0sy", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n7zajfa", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```
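The "look up by comment ID" step above amounts to parsing this JSON array and filtering on the `id` field. A minimal sketch, assuming only the field names visible in the raw response (the `lookup` function name is illustrative, and `raw` excerpts just two of the five records shown):

```python
import json

# Two records excerpted from the raw LLM response above; the full output is a
# JSON array with one object per coded comment.
raw = '''[
  {"id": "rdc_n7zg0ur", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n812lqr", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]'''

def lookup(raw_response: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for record in json.loads(raw_response):
        if record["id"] == comment_id:
            return record
    return None

print(lookup(raw, "rdc_n812lqr")["emotion"])  # prints "approval"
```

A linear scan is enough at this scale; a tool indexing many batches would likely build a `{id: record}` dict once per response instead.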