Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- My take based off the title, I haven’t watched it yet: if anyone has anything to… (ytc_UgwBU-RHH…)
- God the perfect Holy God created humanity with free will. Humanity chooses to r… (ytc_UgwLTMYSC…)
- I only use AI art for stuff I know I’m not capable of drawing, but I can’t find … (ytc_UgxMMYV_L…)
- AI isn't killing the value of a bachelor degree but pointed out some giant holes… (ytc_UgyffVGy4…)
- "Hi Rohidas, we are sorry to say that you got the wrong answer but in any case, … (ytr_Ugw6xEhhr…)
- It looks you removed my last comment, you are not affraid of an AI robot, but st… (ytr_UgxJtWYoX…)
- @pete_lind Dude no one cares who is replaced by AI....as long as the product is … (ytr_UgylGuKjc…)
- When i heard of AI i knew it would be everything he said, us being obsolote howe… (ytc_UgwgF3q6y…)
Comment
It seems that Gödel's theorem effectively proves that alignment must fail once super-intelligent AI reaches questions that we are intrinsically incapable of providing axioms for -- like a young child incapable of a certain level of detail on what they want. From here, if not set to crash, AI must explore and interpret on its own. Relative to what our wishes would actually be, it will rapidly commence a "drunkard's walk" statistical meandering away from them -- a square root of an exponential function in time, that is, still an exponential. Factors that we don't know yet may slow this down or stop it -- but even so, would these happen in time to save us?
As in a sense already pointed out by Dr. Yampolskiy, we are looking to solve an NP-complete problem in linear time. As the exponential rises above the linear projection, we are lost.
The only hope is the addressing of human ego, at least through clarity of self-preservation.
Source: youtube · AI Governance · 2025-09-10T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz63OfxBliCfCNCtX14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-Uj7gmJ_G_un2QcV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzT5BFX_ObMWcGoOwZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxjuBVIIZ2zzrDFAuN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwt6pUfvgUUpyJ4-CZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzEGJj-M4hxda4eDU54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzL7gMRgA9taffEUel4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwVG-0gs6dgk4hvYfp4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxHYi7aBP5rRdbBQbx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy62p-AoL1jjucJe7p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
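A batched response like the one above can be turned into per-comment coding rows with a short script. This is a minimal sketch, not the tool's actual pipeline: the dimension names come from the Coding Result table, but the sets of allowed values are assumptions inferred from the examples shown here, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are assumptions
# inferred from the sample rows above, not the project's full codebook.
SCHEMA = {
    "responsibility": {"none", "government", "developer", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "contractualist"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"approval", "fear", "mixed", "resignation", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batched LLM response into {comment_id: coding_dict}."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        coding = {dim: row[dim] for dim in SCHEMA}
        # Flag values outside the known codebook instead of storing them silently.
        for dim, value in coding.items():
            if value not in SCHEMA[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = coding
    return coded

# Example with one row copied from the response above.
raw = ('[{"id":"ytc_Ugy62p-AoL1jjucJe7p4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_batch(raw)
```

Raising on an unknown value (rather than skipping the row) makes it easy to spot when the model drifts outside the codebook, which is exactly the kind of error this raw-response view exists to surface.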