Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AGI which will remain two years away for the next 50 years. While LLMs excel at mimicking human-like reasoning by processing large amounts of data, they lack deeper, context-aware judgment or real-world experience. They rely on patterns rather than true understanding or abstract thinking like humans. Because LLMs have access to vast amounts of data, they can generate responses that mimic nuanced understanding, even if that understanding is purely statistical. This “guessing” is sophisticated enough to produce human-like interactions and reasoning patterns—but without actual awareness, logic, or intent. Achieving AGI will likely require entirely new architectures, integrating diverse forms of intelligence that extend well beyond language. While LLMs could contribute components to a broader AGI system, a pure language model alone probably won’t reach AGI.

Here’s a list of reasons why some AI professionals might believe AGI is just a couple of years away:

- Attracting Investment: Hype around AGI draws funding and media attention.
- Rapid AI Progress: Recent breakthroughs make AGI feel within reach.
- Exponential Scaling: Belief that larger models will naturally lead to AGI.
- Influence of Optimists: Thought leaders set a trend of short AGI timelines.
- A Loose Definition of AGI: Broad interpretation leads to varied AGI expectations.
- Competitive Pressure: Fear of being left behind in the AI race.
- Public Milestones: High-profile AI achievements fuel AGI anticipation.
- Optimism Bias: Tech enthusiasm creates an overly positive outlook.
- Underestimating Complexity: Misinterpretation of AGI as a linear progression.
- Strategic Projection: Claiming AGI is near to position them as a tech leader.
youtube 2025-06-13T13:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
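These four dimensions map one-to-one onto the fields of the raw LLM response shown below. As a minimal sketch of the record shape in Python, assuming the label sets are exactly those observed in this batch (the actual codebook may define more values; the names here are illustrative):

from dataclasses import dataclass

# Label values observed in this batch; the full codebook may allow more.
RESPONSIBILITY = {"government", "company", "distributed", "ai_itself", "none"}
REASONING = {"consequentialist", "unclear"}
POLICY = {"regulate", "liability", "ban", "none"}
EMOTION = {"outrage", "indifference", "mixed", "approval"}

@dataclass
class CodedComment:
    """One coded comment, mirroring a record in the raw LLM response."""
    id: str              # comment identifier, e.g. a "ytc_..." string
    responsibility: str  # who the commenter holds responsible
    reasoning: str       # style of moral reasoning
    policy: str          # policy remedy the commenter endorses
    emotion: str         # dominant emotional tone

    def is_valid(self) -> bool:
        """True if every dimension uses a label seen in this batch."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)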
Raw LLM Response
[ {"id":"ytc_UgzpzMAUs-JQDivBEe14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwJbF48QsoyPnOjYat4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugwj4q0E87Yw3-l25g94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxP8LhpYKUJBojd56Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugykh0fjR93gBeAO4JV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyjkxGp9Zq4mPVtUnt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz8Lg7ydOJ_cF26Ed54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyED8igJrMrxvLDp0N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwvLvPGZaGjL0HmeHJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxPu-36zw8qd75FbvR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"} ]