Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
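For reference, a minimal sketch of what that lookup can look like programmatically, assuming the coded outputs are stored as a JSON array of per-comment records with the same keys as the raw response shown at the bottom of this page. The file name and layout here are hypothetical, not the tool's actual storage.

```python
import json

# Hypothetical storage: one JSON array of coded comments per batch file.
RAW_RESPONSES_PATH = "raw_llm_responses.json"

def lookup_by_comment_id(comment_id: str) -> dict | None:
    """Return the coded record for one comment ID, or None if it is absent."""
    with open(RAW_RESPONSES_PATH, encoding="utf-8") as f:
        # Each record: {"id", "responsibility", "reasoning", "policy", "emotion"}
        records = json.load(f)
    for record in records:
        if record["id"] == comment_id:
            return record
    return None

# Example with an ID that appears in the batch further down this page:
print(lookup_by_comment_id("ytc_UgxDU4uY4IGgwZMkV8J4AaABAg"))
```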
Random samples — click to inspect
- "So I guess driverless cars have a few flaws. You will never get me into one of t…" (ytc_UgybkwQ3R…)
- "Yes, AI is very dangerous. It depends on good and bad. It is not safe.…" (ytc_Ugxw1ARKX…)
- "@PLMMJ Sadly, it's been from dealing with many different artists over the cours…" (ytr_Ugz2TJU4h…)
- "AI means 'All Indians'. This video is inaccurate. For every laid off american an…" (ytc_UgyjAlpV_…)
- "A person who uses an AI and has it make a film how is that person a filmmaker? T…" (ytc_UgyodgNMQ…)
- "Hmm.. you see, yall think the male robot joking, but just remember it's alot of …" (ytc_UgxDO08r1…)
- "Lucid dreaming does not translate to having knowledge of what's-it-like to be a …" (ytr_UgiX77e4A…)
- "Once again, discussing AI without mentioning the elephant in the room: capitali…" (ytc_UgzdBjuNz…)
Comment
I disagree. Skilled humans provide precise, articulate prompts—our "wet" brains guide the "dry" neural net to precise outputs.
• Progress is already addressing these.
  • On grounding/understanding: We're integrating tools (search, code execution, memory), multimodal inputs (vision, audio), and agentic systems (planning loops, self-correction). This builds a better "world model."
  • On hallucinations: Retrieval-augmented generation (RAG), fact-checking chains, and verification steps reduce them dramatically. Future systems will lean more on hybrid architectures.
  • On reasoning: Techniques like chain-of-thought, tree-of-thought, and external scaffolding (e.g., running simulations or code) enable deeper multi-step thinking. Scaling helps too—larger models show emergent abilities in abstraction and planning.
• Possible solutions beyond pure scaling:
  • Hybrid architectures: Combine transformers with symbolic reasoning, neurosymbolic systems, or cognitive architectures (inspired by folks like Joscha Bach's work on modeled minds).
  • Agentic frameworks: Systems that act in the world (e.g., controlling robots, running experiments) to ground knowledge experientially, much like human learning.
  • Self-improvement loops: Recursive self-enhancement, where AI designs better AI, potentially leading to breakthroughs in causal understanding.
  • Incorporating heuristics: Explicit moral centers, "wonder" algorithms (curiosity-driven exploration), and intuition proxies (e.g., variational methods or uncertainty modeling) can make communication richer and more human-aligned.
You're spot on that human input is key right now—we amplify each other. But as systems evolve, they'll increasingly bootstrap their own "intuition" through interaction with reality, not just text. I don't think we're capped forever; the path to deeper reasoning and communication is through iteration and architectural innovation, not just bigger LLM versions of today.
youtube · 2025-12-14T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
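A short sketch of how a table like the one above can be rendered from a single raw record. The field names come from the JSON below; the helper itself is hypothetical, and the "Coded at" row is a timestamp recorded at coding time rather than part of the model output.

```python
def render_coding_result(record: dict, coded_at: str) -> str:
    """Render one coded record as the Dimension/Value table used on this page."""
    dimensions = ["responsibility", "reasoning", "policy", "emotion"]
    rows = ["| Dimension | Value |", "|---|---|"]
    rows += [f"| {dim.capitalize()} | {record[dim]} |" for dim in dimensions]
    rows.append(f"| Coded at | {coded_at} |")
    return "\n".join(rows)
```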
Raw LLM Response
[
{"id":"ytc_UgxnUXLDm8-HoUAzXQV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDU4uY4IGgwZMkV8J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwWzttJu0iadnqJiol4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwmntsyUAqaOhxmj1V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy3OGqGLionuQu4YaB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugze1pM0_irzApgyKpx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzmDe4-8x8caPqQbA14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzOBBxDfmxX8LRkXqx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwPqgEj2jS6tMQE9D54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxiBvsb9T-6L0TYW654AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
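A minimal sketch of parsing and sanity-checking a raw response like the one above, assuming the model is expected to return a JSON array with exactly these five keys per comment. The key set is taken from this batch, not from a full codebook.

```python
import json

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(raw_text: str) -> list[dict]:
    """Parse a raw LLM response and check every record carries the expected keys."""
    records = json.loads(raw_text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for i, record in enumerate(records):
        missing = EXPECTED_KEYS - record.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
    return records
```

Checking key presence (rather than allowed value sets) keeps the validator tolerant of codebook categories that happen not to appear in a given batch.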