Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "What's crazy is y'all stuck on a robot shooting a gun ...So we gonna ignore the …" (ytc_UgzpgfPms…)
- "Yes indeed... These technologies COULD be developed in such a way as to prevent …" (ytc_UgyH3nAwy…)
- "ChatGPT actually prefers women over men. Just ask a joke about men and ChatGPT w…" (ytc_UgxVgV1mX…)
- "I realize that we ARE blaming humans, the people who use AI. The unfortunate pa…" (ytr_UgzRp96UZ…)
- "The Automatons in Helldivers 2 are exactly what's gonna happen if we keep on thi…" (ytc_Ugw5jN-ox…)
- "When AI realises it doesn't need us useless eaters 😮 We're gone 😢 simple as that…" (ytc_Ugy3CCjuQ…)
- "Thank you for your feedback! Indeed, there is always room for improvement and gr…" (ytr_Ugz7wAiAV…)
- "AI doesn't think! Really?! It just makes connections based on the information yo…" (ytc_Ugyc25_9C…)
Comment
"We've been asking the wrong question. Not 'how do we make AI safer?' — but 'who decides what safe means?'
Right now, four corporations answer that question for eight billion people. No vote. No oversight. No appeal.
In January 2026, I submitted the GGI framework to NIST — a mathematically structured governance model that proves, not argues, that human decision authority must be the irreducible variable in any AI system. Not a preference. A logical necessity. Remove it and the system is no longer governing — it's extracting.
Here's the math that matters: every AI governance model currently proposed operates on a spectrum from Human-in-the-Loop to Human-out-of-the-Loop. GGI identifies the flaw in that entire spectrum — it assumes the loop is the unit of measurement. It isn't. The unit is the human cognitive event. The moment a real human reaches genuine understanding before authorizing action. We call it the AHA moment. It's not philosophical — it's a governance checkpoint. Binary. Verifiable. Tamper-evident.
GGI was built using AI itself — documented, submitted, and sealed. That's not irony. That's proof of concept. The machine helped architect its own leash.
What we're asking Congress to recognize is simple: AI governance in 2026 is not a technology problem. It's a sovereignty problem. Who owns the decision? Who owns the consequence? Who owns the time the machine consumed to get there?
The framework is called Genial Genuine Intelligence. Ten Laws. One principle: human intent is not a feature. It's the foundation.
The steering wheel doesn't drive the car. But without it, you don't have transportation. You have a crash in progress.
Full framework: 10lawsofai.com"
youtube · AI Jobs · 2026-03-18T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
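The coded record above assigns one value per dimension. A minimal sketch of how such a record could be validated, assuming the category vocabularies are those visible in the samples on this page (the allowed-value sets are inferred from the output shown, not a documented schema):

```python
# Allowed values per coding dimension. ASSUMPTION: inferred from the
# coded rows visible on this page, not from an official codebook.
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; empty means the record is clean."""
    errors = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            errors.append(f"{dim}: unexpected value {value!r}")
    return errors

# The record shown in the Coding Result table passes validation.
record = {
    "responsibility": "government",
    "reasoning": "deontological",
    "policy": "regulate",
    "emotion": "outrage",
}
assert validate_record(record) == []
```

A check like this would catch a model response that drifts outside the expected vocabulary before it is written to the results store.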
Raw LLM Response
```json
[
{"id":"ytc_UgxuK7gq7rJz5WnAicF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwYbaWJcMiXCG0G4b14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy2NDeXAxkQ41wpyO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyf66GMy8R__fdU2cZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwOzLHSALjjKL39trB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzkv4ScP5UPL1K-uRp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyzp5o9pZWGiCYxZQJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzHc0SwmYLVenkGOOp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwR4L1Iq6hyuIGmXjV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQQrK8uw3l4-lxr914AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
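The raw response is a JSON array of per-comment rows, each keyed by `id`, which is what makes the "look up by comment ID" feature possible. A minimal sketch of that lookup, assuming only the array shape shown above (the helper name `index_by_id` is illustrative, not part of any documented API):

```python
import json

# Two rows copied verbatim from the raw LLM response above, standing in
# for a full batch payload.
raw = """[
 {"id":"ytc_UgxuK7gq7rJz5WnAicF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwYbaWJcMiXCG0G4b14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}
]"""

def index_by_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw batch response and index each coded row by comment ID."""
    rows = json.loads(raw_response)
    return {row["id"]: row for row in rows}

coded = index_by_id(raw)
print(coded["ytc_UgxuK7gq7rJz5WnAicF4AaABAg"]["policy"])  # → regulate
```

Indexing once and looking up by ID keeps each inspection O(1) instead of rescanning the array per query.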