Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- `ytr_Ugz_Q4jr8…` — "Thank you for your comment! On the AITube channel, we focus on discussing artifi…"
- `ytc_UgyRM_NWn…` — "The irony when these women think they've landed their perfect ai companion yet …"
- `ytr_Ugy4sfm3d…` — "demon is there, AI is gonna do his work his voluntary. Look up the AI robots, th…"
- `ytc_UgyJul70h…` — "Deep fake is deep trouble but why government allowing this so called ai technolo…"
- `ytr_UgxKwhCa3…` — "you may happen to hate the actual real reason they choose to do that, regardless…"
- `ytr_UgxFNyUFZ…` — "@akhilsharma2712 None of the systems are perfect, but that's this needs to be b…"
- `ytc_Ugw7QA4DR…` — "It is not what AI is going to do to humanity, it is what humans will do to other…"
- `ytc_Ugzo0PPo5…` — "If you use AI for 'accessibility', you're just insecure about your drawing skill…"
Comment
Geoffrey Hinton is right about one thing: AI is accelerating faster than governments or corporations can regulate. Where I disagree is in the idea that it’s unstoppable chaos. We can safeguard against AGI and even ASI—if we build the right system.
That’s what I’ve been working on: OmniGuard, a complete framework designed to keep superintelligent systems in check. It combines:
- Omni Theory (the cosmic invariant that survival of intelligence is tied to the survival of life itself),
- GlobeTrotter (a global economic substrate that controls compute, energy, and capital access),
- OmniRepublic with Smarter Contracts (incorruptible, citizen-driven governance and justice), and
- UBIJ (Universal Basic Income & Jobs) to ensure that when AI automates most employment, people still receive guaranteed income and access to meaningful, socially valuable work from a self-financing system.
Together, this creates a lattice of guardrails that makes it irrational and impossible for an AGI to go rogue and ensures society remains stable when traditional jobs disappear.
People shouldn’t be paralyzed by fear. The real answer isn’t to halt progress—it’s to embed structural safeguards at the deepest levels of our economic, political, and technological systems. That’s the work I’m doing, and it’s far from impossible.
So yes, the risks are real. But no, we’re not helpless.
youtube · Cross-Cultural · 2025-09-29T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugw64nUC8M-4tQHKwsp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyfseQzYfex37930Cp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz-ZNzKYCzXFivrMJN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyaRxjRrvU5_NJYyNp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyl6ESSaK1tw_cnPXB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxRmSLgg4XaU_PPAhV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyCYF-_QkbaPy4UQNF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzDaLVTLI1Ol5OlJLB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxGyo450IlQjjNYt8N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxDptl0T15AvkXQ1Tp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
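The look-up-by-comment-ID view above can be reproduced with a small sketch: parse the model's raw JSON array and index each coded row by its `id`. This is a minimal illustration, not the tool's actual implementation; the two entries below are copied from the raw response above (truncated to keep it short), and `index_by_id` is a hypothetical helper name.

```python
import json

# Two rows copied verbatim from the raw LLM response shown above.
raw_response = '''
[
  {"id":"ytc_UgzDaLVTLI1Ol5OlJLB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxDptl0T15AvkXQ1Tp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
'''

def index_by_id(response_text):
    """Parse the model's JSON array and index coded comments by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
hit = codes["ytc_UgzDaLVTLI1Ol5OlJLB4AaABAg"]
print(hit["responsibility"], hit["policy"])  # government regulate
```

The dictionary keyed by `id` is what makes "Look up by comment ID" an O(1) operation: once the batch response is indexed, any coded comment's dimensions (responsibility, reasoning, policy, emotion) can be pulled directly, as in the "Coding Result" table above.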