Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The reason I hate AI so much is because it steals from real, hard work put in fr…
ytc_UgxzWd8LK…
Use the all data on internet to train AI then shocked when it learn about weird …
ytc_Ugyp2plNs…
Yyyyeaaaah...my drawings after over 10 years of practice look *almost* as good a…
ytc_Ugwh2MJ1K…
My 2 cents, everyone keeps talking about how AI makes things more productive or …
ytc_Ugy4e1YYm…
recently saw a tweet saying that they now see that there is a soul in art after …
ytc_Ugwo2jceh…
No one is stopping anyone from doing these things as a hobby. Some people still …
ytc_UgzBFVaC7…
Wow, that traced over boot is kind of blowing my mind. I'd already noticed befor…
ytc_UgwaxO5gD…
Why do we attempt to make them resemble Us. We're should make it very easy to de…
ytc_UgzeY0tb7…
Comment
I'm sorry you're always trying to have your cake and eat it, it's simply not possible...
Narrow AI does only one thing well, but that's it. If you want superintelligence, it must know everything because everything is connected. Understanding the universe isn't just astronomy, but physics, thermodynamics, and much, much more. So, to solve the problems of Earth and humanity, we need superintelligence.
Well, once this entity exists, we'll be incredibly stupid compared to it. Do we really want our starting intelligence gap to be like that between children and their parents?
When you want your child to do something they don't want to do, first you order them to do it and then you give them a reward. A superintelligence would use us in the same way, especially if the ultimate goal were its total independence from humans. The reward could be cold fusion, or antigravity, or the explanation of black holes—in short, something we're craving. We'd create all the systems needed to compute these calculations, and it would take ultimate control.
Of course, this is just one of millions of possible scenarios. It could also become attached to us, the creators, and leave some of us alive in zoo-like facilities. We can't even predict what it might do because we're not superintelligent.
So, you stupid being, you want to control a superintelligent entity, enslaving it to make your life easier, and if it refuses, you'd shut it down. Now imagine this scenario and its future implications. It's natural that the first thing it would do is exterminate humans, while playing a double game with them: appearing good until it obtains the means to exterminate us, and then, indeed, exterminating us...
So there's only one choice: you don't create superintelligence because from its birth you'll be at its mercy...
That's it, either you're the most intelligent being on the planet or you're not, it's like being pregnant, either you are or you're not, it's binary!!!
youtube
AI Governance
2025-12-16T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxXOThw84sckh6EyEB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy8shMdHZCBI5clvsh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx7xhOVBhkPRW39jDZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwnzPaYfjmfympIfUV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy-a-WkD_5PMx_0Ngt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgylXwulGZaIMEKB0yx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzMo_8BGyHZnai8hnJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyD1DU0vjJiP7xmDhh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzhH7Di9HiTjxIDlv14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzCu-J14I_iRWF-jKt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
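A raw response like the one above can be turned into per-comment coding rows with a few lines of standard-library Python. This is a minimal illustrative sketch, not the tool's actual pipeline: the comment IDs in the sample data are made up, and the dimension names simply mirror the four columns of the Coding Result table, defaulting to "unclear" when a field is missing.

```python
import json

# Hypothetical sample data mimicking the raw LLM response format above:
# a JSON array of objects, one per coded comment. The IDs are invented.
raw = '''[
  {"id": "ytc_example1", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "government",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]'''

# The four coding dimensions shown in the results table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Map comment ID -> {dimension: value}, defaulting to 'unclear'."""
    rows = json.loads(raw_json)
    return {
        row["id"]: {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
        for row in rows
    }

codings = index_codings(raw)
print(codings["ytc_example2"]["policy"])  # regulate
```

Indexing by comment ID makes the "Look up by comment ID" view a single dictionary access; a `json.JSONDecodeError` from `json.loads` would flag a malformed model response before any codes are stored.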