Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Not Microsoft's AI, which they will bundle in an attractive package and sell to …
rdc_oh2eo7z
No matter how great the tool, the outcome of the work depends on the person usin…
ytc_UgwYYYbzM…
The only way you could tell it’s AI is the lack of expression in the voice, but …
ytc_UgxM7BMnl…
Huge electrical energy demands...when the power grid is compromised AI will have…
ytc_UgzA1VEHL…
Hal or whatever the male robot's name is, already seems to be mad at humans haha…
ytc_Ugx18aMzx…
This isn't something to joke about we should take this ai seriously these will b…
ytc_Ugz1D0tsG…
This is incredible… if all the schools in America operated like this I wouldn’t …
ytc_UgwVdduXJ…
Correction, there was never a good use for ai, there was no "good mission that w…
ytc_UgzE-NcbG…
Comment
AGI which will remain two years away for the next 50 years.

While LLMs excel at mimicking human-like reasoning by processing large amounts of data, they lack deeper, context-aware judgment or real-world experience. They rely on patterns rather than true understanding or abstract thinking like humans. Because LLMs have access to vast amounts of data, they can generate responses that mimic nuanced understanding, even if that understanding is purely statistical. This “guessing” is sophisticated enough to produce human-like interactions and reasoning patterns, but without actual awareness, logic, or intent.

Achieving AGI will likely require entirely new architectures, integrating diverse forms of intelligence that extend well beyond language. While LLMs could contribute components to a broader AGI system, a pure language model alone probably won’t reach AGI.
Here’s a list of reasons why some AI professionals might believe AGI is just a couple of years away:
Attracting Investment: Hype around AGI draws funding and media attention.
Rapid AI Progress: Recent breakthroughs make AGI feel within reach.
Exponential Scaling: Belief that larger models will naturally lead to AGI.
Influence of Optimists: Thought leaders set a trend of short AGI timelines.
A Loose Definition of AGI: Broad interpretation leads to varied AGI expectations.
Competitive Pressure: Fear of being left behind in the AI race.
Public Milestones: High-profile AI achievements fuel AGI anticipation.
Optimism Bias: Tech enthusiasm creates an overly positive outlook.
Underestimating Complexity: Treating the path to AGI as a linear progression.
Strategic Projection: Claiming AGI is near to position themselves as tech leaders.
youtube
2025-06-13T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzpzMAUs-JQDivBEe14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwJbF48QsoyPnOjYat4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwj4q0E87Yw3-l25g94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxP8LhpYKUJBojd56Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugykh0fjR93gBeAO4JV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyjkxGp9Zq4mPVtUnt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8Lg7ydOJ_cF26Ed54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyED8igJrMrxvLDp0N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwvLvPGZaGjL0HmeHJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxPu-36zw8qd75FbvR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
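The raw response above is a JSON array of per-comment records, each carrying the five coding dimensions keyed to a comment ID. The "look up by comment ID" step can be sketched as parsing that array and indexing it by `id` — a minimal illustration, assuming the model returns valid JSON with exactly the fields shown (the two sample records below are copied from the response above):

```python
import json

# Raw LLM response, as shown above: a JSON array of coded comments.
raw_response = """[
  {"id": "ytc_UgzpzMAUs-JQDivBEe14AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwj4q0E87Yw3-l25g94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)

# Look up the coded dimensions for one comment by its ID.
record = codes["ytc_Ugwj4q0E87Yw3-l25g94AaABAg"]
print(record["emotion"])  # indifference
```

In practice the parse step would also need to handle malformed model output (e.g. trailing prose around the JSON), which this sketch deliberately omits.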