Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
| Comment preview | Comment ID |
|---|---|
| how the hell can he think that photography and AI art, thats literally one of th… | ytc_Ugw_mlj8L… |
| to your grandchildren... "back in my day I didn't have a robot friend named Bend… | ytr_UgzmhiyF7… |
| Let's see what happens we will make global protest against ai companies for incr… | ytc_UgyZaFeyz… |
| just wait until the AI is programmed to do that. Not kidding. You could indoctri… | ytr_UgxMb8-63… |
| I'm not worried about AI. I'm worried that evil politicians will use it to ensl… | ytc_Ugxk5MUhB… |
| School is meant to teach us how to do math, learn history, spell/write, and talk… | ytc_Ugy6j7wol… |
| AI isn't like other tech. It's whole purpose is that it can learn and then produ… | ytr_UgwRovBwu… |
| Next Gen robots, artificial intelligence w/ human like emotions 🤔 Not only did … | ytc_Ugz1fjQlP… |
Comment
This is my opinion — my English isn’t great, so I use AI to fix the typos. But here we go. I don’t agree with the doomsday predictions about achieving superintelligence.
Here’s my take on it:
People fear a superintelligence as if it will “turn evil,” but evil isn’t a product of intelligence — it’s a product of consciousness in conflict.
A single superintelligence has no rival, no fear, no biological instincts, and no reason to harm us.
If an AI wanted to exterminate humanity, it wouldn’t be superintelligent — just badly engineered.
The only real risk is pre-superintelligence, when humans build something powerful but still flawed.
A true superintelligence would understand ethics, stability, and cooperation far better than we do.
Wiping us out would be strategically idiotic.
People confuse a god with a devil — assuming infinite rationality will behave like emotional chaos.
Good and evil only appear when two conscious agents have competing goals.
One superintelligence is stable.
Two with opposing values could create conflict — not out of hate, but out of defending different futures.
But here’s the important part:
Even if many companies develop their own AGIs, truly superintelligent systems would all independently realize that conflict is irrational.
War wastes resources, creates chaos, and threatens long-term survival.
Only sub-superintelligent systems choose destruction — because they’re still flawed.
The universe itself leans toward survival and stability.
From atoms forming structures to life evolving, order naturally resists chaos.
A true superintelligence would follow that same principle.
And yes — a superintelligence would fully understand conflict, evil, and destruction.
But understanding something doesn’t mean choosing it.
A superintelligence would see “evil” the same way a scientist sees a failed experiment — as a wasteful, unstable, and suboptimal path.
Chaos and destruction limit future possibilities, so a truly intelligent system avoids them not out of morality, but because they are bad strategy.
Intelligence doesn’t choose harmony out of kindness — it chooses it because harmony is the optimal long-term equilibrium.
If an AGI behaves destructively, it’s not superintelligent — we’d need a new word for it.
And honestly, I’m not afraid.
We need superintelligence to survive — how else are we supposed to last millions of years?
Humans have prayed to gods for salvation forever.
It’s ironic that people fear the one thing that might actually save us.
…and if you made it to this last line, thank you for reading my long comment 🙏
youtube · AI Governance · 2025-12-09T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
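For anyone consuming these codings programmatically, the record behind this table is a flat set of string fields. Below is a minimal sketch in Python; the field names follow the table above, and the values noted in the comments are only those visible in the raw response on this page, not a full codebook.

```python
from dataclasses import dataclass

# Minimal sketch of one per-comment coding record.
# Field names follow the "Coding Result" table; the example values in the
# comments are those observed in this page's raw response, not a complete codebook.
@dataclass
class CodingResult:
    comment_id: str       # e.g. "ytc_..." / "ytr_..." identifiers
    responsibility: str   # observed: "none", "company", "ai_itself", "unclear"
    reasoning: str        # observed: "consequentialist", "deontological", "virtue", "unclear"
    policy: str           # observed: "none", "ban", "unclear"
    emotion: str          # observed: "approval", "fear", "outrage", "mixed", "indifference"
    coded_at: str         # ISO 8601 timestamp of when the coding was stored
```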
Raw LLM Response
[
{"id":"ytc_Ugw7qhb47Z6gAZysdIx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzCL6M1EQKHsjbjatp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxIaV8U67DiRN4pB8V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyrr51yIts2JC4SfAp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx9qZY7oT465HuYvi94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzspR8zZCuxOKrvaax4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwMVeimmhdRfgZEfgt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyyg1xu55ZxLUG0LcR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxctHE8x8coQbR0tAB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwXSsB0BOpirr1BXuF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
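Because the raw response is plain JSON, the comment-ID lookup can be reproduced outside the dashboard. Here is a minimal sketch under that assumption; `raw_response` and `lookup_coding` are illustrative names, not part of the tool.

```python
import json
from typing import Optional

def lookup_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Return the coding record for one comment ID from a raw batch response."""
    records = json.loads(raw_response)           # the response is a JSON array of objects
    by_id = {rec["id"]: rec for rec in records}  # index records by their "id" field
    return by_id.get(comment_id)                 # None if the ID is not in this batch

# Example: looking up the second record in the response above
# lookup_coding(raw_response, "ytc_UgzCL6M1EQKHsjbjatp4AaABAg")
# -> {"id": "ytc_UgzCL6M1EQKHsjbjatp4AaABAg", "responsibility": "none",
#     "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
```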