Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I hate whenever AI Bros use the argument that oh all of these innovations stole …
ytc_UgyYlw68P…
I don’t understand why people hate ai art so much. If someone doesn’t have the m…
ytc_UgzV6FmuY…
I would be terrified giving anyone a gun to just unload next to me but that robo…
ytc_UgysLiQPX…
@wholebitmedia do you even think for yourself? or do you just ask chatgpt what …
ytr_Ugx2IUdC9…
It seems like your comment might have been cut off! If you’re referring to the d…
ytr_UgwBF4eUr…
@MarkSavant Wow, I looked that one up. That is not the kind of data center I'm r…
ytr_UgwaNTcvb…
That's also not how artists use AI. The only people who press a button on an AI …
rdc_n3y53gc
The Godfather of AI CAN DESTROY IT, IT WILL BECOME VERY DANGEROUS FOR OUR NATION…
ytc_Ugw6ZEUto…
Comment
A lot of the disagreement here comes from collapsing very different questions into one.
This isn’t really about whether AGI will be conscious, benevolent, malicious, or “smart enough to take over.” Those are philosophical or speculative questions.
The practical risk surface is simpler and already here:
We are building systems that reason, coordinate, and act across other systems, and we are doing so without making authority, causality, or responsibility first-class architectural constraints.
History shows we never ship zero-bug systems. That’s fine. The real failure mode isn’t bugs — it’s irreversible action without reconstructable cause.
If a system:
• can trigger real-world actions
• can do so faster than human review
• can interact with other agents and tools
• and cannot produce tamper-evident proof of why it acted
then safety discussions about “alignment” are premature.
Receipts-native, append-only, verifiable decision trails don’t make intelligence safe. They make governance survivable. They ensure that when something goes wrong — and it will — the causal chain survives the failure.
This isn’t about trusting humans more, trusting AI less, or hoping consciousness saves us. It’s about refusing to ship systems where power silently accumulates.
You don’t need perfect control.
You need bounded authority, detectable violations, and recoverable reality.
Everything else is theater.
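The "receipts-native, append-only, verifiable decision trail" the comment calls for can be sketched concretely. A minimal (and purely illustrative) version is a SHA-256 hash chain: each decision record embeds the hash of the previous record, so altering any entry breaks every link after it. The function and field names below are assumptions for illustration, not an existing API.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record

def append_decision(trail, actor, action, cause):
    """Append a decision record whose hash chains to the previous entry.

    Tampering with any earlier record changes its hash and breaks
    every subsequent link, so alterations are detectable."""
    prev_hash = trail[-1]["hash"] if trail else GENESIS
    record = {"actor": actor, "action": action, "cause": cause, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(record)
    return record

def verify(trail):
    """Recompute every link; return True only if the whole chain is intact."""
    prev = GENESIS
    for rec in trail:
        if rec["prev"] != prev:
            return False
        body = {k: rec[k] for k in ("actor", "action", "cause", "prev")}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

This is the "detectable violations" half of the argument, not the "bounded authority" half: a hash chain cannot stop an action, but it guarantees that the causal record of why the action happened survives, and that any after-the-fact edit to that record is visible.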
youtube
AI Governance
2026-01-05T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyoPwsLfxb1aJL6LvB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzc8AzvzYhkXN7DKhl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8GPgTIxzD1-d-v7h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNonqihkYrzE46LgV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugydvff3stD2l6XRpMZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwwEg7zc4Z8xPVjIwJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz1Gf2yGxX411qzTr94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx4nFIe8E6Y3YDZpJZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwbKu20HIzmoIHxmQN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugygs2oMH6UnQ8kJDHt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
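Once the raw LLM response parses as JSON, turning it into per-dimension tallies (and the per-comment lookup the coding-result table reflects) is a few lines. This is a hedged sketch of how such output could be consumed; the function names are illustrative, not part of any existing pipeline.

```python
import json
from collections import Counter

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def summarize(records):
    """Tally each coding dimension across the parsed LLM output."""
    return {dim: Counter(rec[dim] for rec in records) for dim in DIMENSIONS}

def lookup(records, comment_id):
    """Return the coded row for one comment ID, or None if absent."""
    return next((rec for rec in records if rec["id"] == comment_id), None)

# Usage: records = json.loads(raw_llm_response)
#        summarize(records)["emotion"]  -> Counter of emotion codes
#        lookup(records, "ytc_...")     -> one coded row or None
```

Counting codes per dimension also makes malformed output easy to spot: any value outside the expected code set (e.g. a `responsibility` other than `none`, `unclear`, `ai_itself`, or `distributed` in this sample) will surface as an unexpected Counter key.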