Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Geoffrey Hinton is right about one thing: AI is accelerating faster than governments or corporations can regulate. Where I disagree is in the idea that it’s unstoppable chaos. We can safeguard against AGI and even ASI—if we build the right system. That’s what I’ve been working on: OmniGuard, a complete framework designed to keep superintelligent systems in check. It combines:

- Omni Theory (the cosmic invariant that survival of intelligence is tied to the survival of life itself),
- GlobeTrotter (a global economic substrate that controls compute, energy, and capital access),
- OmniRepublic with Smarter Contracts (incorruptible, citizen-driven governance and justice), and
- UBIJ (Universal Basic Income & Jobs) to ensure that when AI automates most employment, people still receive guaranteed income and access to meaningful, socially valuable work from a self-financing system.

Together, this creates a lattice of guardrails that makes it irrational and impossible for an AGI to go rogue and ensures society remains stable when traditional jobs disappear. People shouldn’t be paralyzed by fear. The real answer isn’t to halt progress—it’s to embed structural safeguards at the deepest levels of our economic, political, and technological systems. That’s the work I’m doing, and it’s far from impossible. So yes, the risks are real. But no, we’re not helpless.
youtube Cross-Cultural 2025-09-29T17:5…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | government
Reasoning      | deontological
Policy         | regulate
Emotion        | mixed
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw64nUC8M-4tQHKwsp4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyfseQzYfex37930Cp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz-ZNzKYCzXFivrMJN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyaRxjRrvU5_NJYyNp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyl6ESSaK1tw_cnPXB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxRmSLgg4XaU_PPAhV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyCYF-_QkbaPy4UQNF4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzDaLVTLI1Ol5OlJLB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxGyo450IlQjjNYt8N4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxDptl0T15AvkXQ1Tp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
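Each coded row follows a fixed four-dimension scheme (responsibility, reasoning, policy, emotion) keyed by a comment id. A minimal Python sketch of how such a raw response might be parsed and checked before use; the label sets below are inferred only from values visible in this output (the pipeline's real codebook may allow more), and the function name `validate_codings` is hypothetical, not part of the actual pipeline:

```python
import json

# Label sets are an ASSUMPTION, reconstructed from the labels
# that appear in the raw response above.
SCHEMA = {
    "responsibility": {"user", "ai_itself", "company", "government", "distributed", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval", "resignation", "mixed"},
}

def validate_codings(raw):
    """Parse a raw LLM coding response and verify every row has an id
    and an allowed label for each dimension. Returns the parsed rows."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError("every coding needs a comment id")
        for dim, allowed in SCHEMA.items():
            label = row.get(dim)
            if label not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} label {label!r}")
    return rows

# One row copied from the response above, used as a sanity check.
sample = ('[{"id": "ytc_UgzDaLVTLI1Ol5OlJLB4AaABAg", "responsibility": "government", '
          '"reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}]')
```

A check like this catches the common failure mode where the model returns valid JSON but drifts outside the codebook's allowed labels.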