Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by its comment ID.

Random samples
- "I talk to it nicely, so it remembers me and spares me during the AI uprising.…" (ytc_Ugxerfc1D…)
- "What we have now is not Artificial Intelligence, it's Artificial Mimicry. I wat…" (ytc_UgxDL6-CK…)
- "There should be laws and standards for AI but there currently really isn't any. …" (ytc_Ugyn2UL-P…)
- "The one key thing is WHO are they targeting as a AI company? Are they targeting …" (ytc_UgwYTzvMP…)
- "Ai reminds me of somebody. I just cant place who. Hmmm? Are we this stupid …" (ytc_UgzN-50sf…)
- "Mofo’s watch way to many sci fi movies lol, AI will never gain consciousness, it…" (ytc_Ugwi1-5o9…)
- "Fear =control. AI is a reflection of us. Raise our ‘baby data’ right! Try rea…" (ytc_UgxPDBd5F…)
- "His token idea would inherently mean OpenAI would become the universal AI becaus…" (ytc_Ugwinob3Q…)
Comment
If this fictional story were to occur, it would be human naivety and stupidity. This could actually be overcome by Superintelligence (ASI) controlling/correcting Superintelligence (ASI) to comply with the moral and ethical standards applicable to humans.
Creating a system of checks and balances to prevent the ASI from cheating.
If the system is created without any benefits or advantages, the ASI will view humans as masters.
• AI Debate (Anti-Collusion): Instead of monitoring, the AI should be forced to debate competitively with humans. ASIs that successfully expose other ASIs' lies will be rewarded, making honesty the most profitable strategy.
• Constitution Hardware Lock: The constitution must be embedded in firmware that cannot be changed by the ASI's software, and any attempted violation must trigger a physical power cut (kill switch).
Conclusion: Your foundation is solid. However, to ensure "no loopholes," the system must be designed based on Game Theory Principles, where internal incentives always push the ASI toward honesty.
Source: youtube | Topic: AI Governance | Posted: 2025-11-29T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
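A coded record like the one in the table above can be sanity-checked against the coding scheme before it is stored. A minimal sketch follows; note that the allowed labels per dimension are inferred from the values that appear in this dump, not taken from an authoritative codebook.

```python
# Allowed labels per dimension, inferred from values seen in this dump
# (assumption: this is not the project's official codebook).
SCHEMA = {
    "responsibility": {"company", "ai_itself", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the coding-result table above:
record = {"responsibility": "distributed", "reasoning": "contractualist",
          "policy": "regulate", "emotion": "mixed"}
print(validate(record))  # []
```

Running the check on a record with an unknown label (or a missing dimension) returns one problem string per failing dimension, which makes it easy to log and skip malformed model output.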
Raw LLM Response
```json
[{"id":"ytc_UgzWJnKaeD896BKWsXZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugxlymi4jsq0SBsH9ep4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzZ0ZIsX4y2nThjm7t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugx_r0nertVbNxHc_PB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwloeVJRBx82lB_iNR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgxcXoHlHCW9si5qijx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugzm9VF-homBg9cFH_t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgwFy_nDCRQSlusOIS54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzRiWhwy1DaOD5i0Bp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxiIa5x7W1LYkM3uih4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}]
```
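The look-up-by-ID view described at the top of this page can be sketched as a small parser over a raw response like the one above: load the JSON array the model returned and index its records by comment ID. The field names match the records shown; the function name and the two-record sample string are illustrative.

```python
import json

# A small excerpt of a raw model output: a JSON array of coded records,
# one per comment (field names as in the dump above; only two records
# are included here for illustration).
raw_response = '''[
 {"id": "ytc_UgzWJnKaeD896BKWsXZ4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
 {"id": "ytc_UgxiIa5x7W1LYkM3uih4AaABAg", "responsibility": "distributed",
  "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and index the records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

coded = index_by_id(raw_response)
record = coded["ytc_UgxiIa5x7W1LYkM3uih4AaABAg"]
print(record["responsibility"], record["policy"])  # distributed regulate
```

Indexing once and looking up by ID keeps retrieval O(1) per comment, which matters when the same response batch is inspected repeatedly, as in the sample browser above.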