Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by choosing one of the random samples below.
* "Google primary AI compute is on their own TPU, so the comparison of what google …" (ytc_UgzSVoK2Q…)
* "Machine Learning at Amazon Distribution Center: Okey!! Amazon is a distribution …" (ytc_UgzK7wGh1…)
* "What about the data centres and water conservation issue? Essentially AI is a ne…" (ytc_UgzHi4c65…)
* "I'm in trouble with my CoPilot in this instance… The other day, it confessed its…" (ytc_UgxVTZ_jG…)
* "I'm completely for poisoning your work to fuck over scrapers but I'm not against…" (ytc_UgynQGBmT…)
* "I am a frontend engineer with 20 years of experience. Centering a div is not eas…" (rdc_l4gbb00)
* "Everyone hates AI until they need to pay hundreds of dollars to an artists. Like…" (ytc_UgxIeq7Tl…)
* "Gee, definitely a billion dollar company that has high revenue, to think that th…" (ytc_Ugxr-LOxz…)
Comment
Hypothetical Discovery Timeline with Years + Spinoffs
2030–2035 → Riemann Hypothesis solved (Mathematics)
* Spinoffs: new encryption methods, quantum-safe security, links to quantum chaos.
* 🔥 Impact: global scramble in cybersecurity, finance, and communications.
2035–2040 → Navier–Stokes equations cracked (Math/Physics)
* Spinoffs: turbulence tamed → perfect flight, weather models, fusion energy stabilized.
* 🔥 Impact: cheap clean energy, weather engineering, geopolitics shift.
2040–2045 → Protein folding fully solved (Math/Biology)
* Spinoffs: nanomedicine, programmable organisms, ecological modeling.
* 🔥 Impact: biotech revolution, blurred lines between “natural” and “artificial life.”
2045–2050 → P vs NP resolved (Computer Science)
* Spinoffs: exponential acceleration (if P=NP), or sharp limits clarified (if P≠NP).
* 🔥 Impact: massive changes to AI, optimization, supply chains, and cryptography.
2050–2060 → Self-assembling nanotech realized (Engineering)
* Spinoffs: atomically precise manufacturing, nanobots in medicine, adaptive infrastructure.
* 🔥 Impact: collapse of resource scarcity, societal shock in economics.
2060–2070 → Yang–Mills mass gap solved (Physics/Math)
* Spinoffs: new states of matter, refined quantum field theory, nuclear-level control.
* 🔥 Impact: possible exotic tech + new energy/weapon risks.
2070–2080 → First AGI “Mathematical Oracle” emerges (AI/Math)
* Spinoffs: AI invents new math branches, proofs outpace human comprehension.
* 🔥 Impact: humans no longer the apex of reasoning → existential reorientation.
2080–2090 → PDEs & Geometry unify relativity + quantum (Math/Physics)
* Spinoffs: quantum gravity, exotic propulsion, cosmic-scale physics.
* 🔥 Impact: humanity gains tools for interstellar travel + deeper cosmic awareness.
2090–2100 → Human–machine symbiosis stabilized (Bio/AI/Nanotech)
* Spinoffs: neural linking, collective cognition, cognitive augmentation.
* 🔥 Impact: individuality dissolves → culture and identity undergo total reinvention.
2100+ → New meaning systems emerge (Philosophy/Culture)
* Spinoffs: mathematical spirituality, post-scarcity ethics, cosmic consciousness.
* 🔥 Impact: humanity redefines itself as a civilization woven with math, biology, and AI.
⚡ Notice how the intervals shrink:
* Early math/physics problems (2030–2050) take ~decades each.
* Later AGI-assisted breakthroughs (2070+) happen within years.
* By 2100, change is continuous and self-reinforcing.
Platform: youtube · Topic: AI Governance · Posted: 2025-08-29T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxotr1u081phvNuWXR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyhB0_K1FjOuhE0vKJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyNA-SEoxyog_FNKRh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx54ElM-brApl_SqBN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxgpYVnSaUbITmh14p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz974xYQ574haMOp_94AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyfLDe-p_hZBs9HSfF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwmKLWKpssm3lk0uER4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy4URpZKmtcsLjoVnF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx4KWJRc4bjM_OHSgh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
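The lookup-by-ID view can be reproduced directly from the raw model output: the response is a JSON array in which each record is keyed by comment ID and carries the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch, assuming nothing beyond the JSON shape shown above (the `raw_response` variable and `lookup` helper are illustrative names, not part of the tool):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id":"ytc_Ugxotr1u081phvNuWXR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyhB0_K1FjOuhE0vKJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
"""

def lookup(raw: str, comment_id: str):
    """Parse the model's JSON array and return the coding record for one comment ID."""
    records = json.loads(raw)
    # Return the first record whose "id" matches, or None if the ID is absent.
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup(raw_response, "ytc_Ugxotr1u081phvNuWXR4AaABAg")
print(coding["responsibility"], coding["emotion"])  # → none indifference
```

In practice the model output may contain surrounding prose or malformed JSON, so a production parser would want to extract the bracketed array and handle `json.JSONDecodeError` before indexing.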