Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgwsFUNDi…` — "Our purpose will be gone. What's the purpose of learning more? When AI will do e…"
- `ytc_Ugy8Lm6e2…` — "Can AI make a compelx asset which requires several tileables and trim sheets ? C…"
- `ytc_Ugxjdf54G…` — "I've had nothing but issues with AI and have my phones hacked, privacy and secur…"
- `ytc_UgzHN3KBU…` — "i have the solution for Ai in the future i know the solution but is hard to crea…"
- `rdc_f1eeot3` — "One of the professors on my advising committee is fairly prominent in this field…"
- `ytc_Ugyt-z0Y_…` — "These are bots, the artificial intelligence is trying to convince us that it is …"
- `ytr_UgxxXbs9a…` — "Facts here, even the memes are off putting to me and I refuse to watch any alter…"
- `ytc_Ugy_0ZjAI…` — "Listen i fully support AI and i absolutely ADORE. Robots and ai...BUT AI ARTISTS…"
Comment
Even with fairly optimistic governance (control doubling every 24–36 months), median AI capability outpaces it (≈10× over 5 yrs vs ≈3×).
The gap curve trends upward under median assumptions, implying increasing risk (more capability per unit of control).
Only the most optimistic control scenario and the slowest capability scenario keep the gap ~flat.
Want me to tweak the scenarios (e.g., cap capability with an S-curve, change doubling times, or add a “breakthrough shock” year) and regenerate the graphs?
Source: youtube · AI Governance · 2025-09-04T14:2…
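The growth figures quoted in the comment above can be sanity-checked with the compound-doubling formula, growth over *t* months at doubling time *d* is 2^(t/d). A minimal sketch, assuming an ~18-month capability doubling time (inferred from the comment's "≈10× over 5 yrs"; not stated in the comment itself) against its stated 24–36-month control doubling:

```python
# Sanity check of the growth figures in the comment above.
# Growth over `months` with a given doubling time is 2 ** (months / doubling).

def growth(months: float, doubling_months: float) -> float:
    return 2 ** (months / doubling_months)

HORIZON = 5 * 12  # five years, in months

capability   = growth(HORIZON, 18)  # ~18-month doubling, inferred from "~10x"
control_fast = growth(HORIZON, 24)  # optimistic end of the stated 24-36 months
control_slow = growth(HORIZON, 36)  # pessimistic end

print(f"capability: {capability:.1f}x")                          # ~10.1x
print(f"control: {control_slow:.1f}x to {control_fast:.1f}x")    # ~3.2x to ~5.7x
```

This reproduces the comment's ≈10× vs ≈3× gap only at the slow (36-month) end of its own control range; at 24 months the gap narrows to roughly 10× vs 5.7×.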
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxl0VvVhOQTnmQRbb94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwhwjpSVmzuQNIVr9d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwA2-9uMrGxepcpj4h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxpPkglGWIuuPzfdfF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxEzL9FSlsX79UNFZJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyLmNLr0TVZumN4w1N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwwzftvFzAIR4MbVQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzJFNueUk75ZRSkvZx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8yE828t-oZmkaaox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxz2L1tKYwyAiO-yB14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
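Before a batch response like the one above is stored, it helps to parse it and reject codes outside the schema. A minimal sketch, with allowed values inferred from the codes seen in this sample (not from an official codebook), and a hypothetical `validate` helper:

```python
import json

# Allowed codes per dimension, inferred from the values visible in this
# sample response -- an assumption, not a published codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "distributed", "ai_itself", "none"},
    "reasoning":      {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy":         {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion":        {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject rows with unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        for field, allowed in ALLOWED.items():
            if row.get(field) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {field}={row.get(field)!r}")
    return rows

# Example: one well-formed row passes.
rows = validate('[{"id":"ytc_x","responsibility":"company",'
                '"reasoning":"virtue","policy":"none","emotion":"fear"}]')
print(len(rows))  # 1
```

Catching an out-of-vocabulary code at ingest time is cheaper than discovering it later as a silent `NaN` in the analysis.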