Raw LLM Responses
Inspect the exact model output for any coded comment. Look one up by its comment ID, or click one of the random samples below.
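As a rough sketch of what the ID lookup does under the hood (the `raw_responses` directory, file layout, and helper name are assumptions for illustration; only the entry format mirrors the Raw LLM Response shown further down):

```python
import json
from pathlib import Path

def find_coded_comment(comment_id: str, responses_dir: str = "raw_responses"):
    """Return the coded entry for one comment, plus the batch file it came from.

    Assumes each raw LLM response was saved as a JSON array of objects
    carrying an "id" field, as in the Raw LLM Response shown below.
    """
    for path in Path(responses_dir).glob("*.json"):
        for entry in json.loads(path.read_text()):
            if entry.get("id") == comment_id:
                return entry, path
    return None, None

# Example: look up the comment inspected on this page
coding, batch_file = find_coded_comment("ytc_UgzucEc2uEoXs-EGNIp4AaABAg")
if coding is not None:
    print(f"found in {batch_file}: {coding}")
```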
Random samples — click to inspect
- ytc_Ugz0TjymD… · I want to make a counterpoint WRT "AI can't create anything new" as to what I fi…
- ytr_UgxNDqj1F… · "Most of the stuff I vent about are regular stressors and grief, and those topic…
- ytc_UgwwpfTy1… · The rethoric of Vance is just stupid, either way you ARE the product, either to …
- ytc_UgwmZHCzw… · I think people recycling the phrase, "...but you could have paid an actual artis…
- ytc_Ugy3I4NFq… · What he is explaining is a soul. A.I. doesn’t have a soul. It can never develop …
- ytc_UgzL86oWs… · I'm gonna have new job Robot destroyer !!!! Might as well call me the Terminato…
- ytc_UgySrBD8a… · I'm an artist myself, and I've noticed a major difference in how AI versus human…
- ytc_Ugy2Cn1tP… · I asked jhaat bar gpt will you destroy humanity it said chances are slim but in …
Comment
> The simulation thing is something I remain skeptical of.
>
> First off, it's not a theory. There is no "simulation theory." You can't test it. It's not even a hypothesis. It's an unscientific belief on the same level as believing in God or the Tooth Fairy, except perhaps less useful.
>
> Secondly, I find the initial syllogism specious:
>
> 1) if I had the capability to run a simulation of this interview a billion times, I would.
> 2) therefore, the odds of this being a real interview is 1 in a billion.
>
> The big problem here is you assume your conclusion is correct at the outset--that we're living in a simulation because it's the most likely probability. And on the basis of that assumption, you draw the conclusion that your assumption is correct--again that we're living in a simulation because it's the most likely probability. This is a tautology, bruh. It's a circular argument.
>
> Before you can make any assertions that we're living in a simulation, you have to first prove that consciousness is something that can be recreated artificially. But here's the issue with that: while we may be able to mechanistically replicate some operational outputs of cognition (i.e. a simulacra of consciousness), we can't actually replicate cognition itself. We don't even know how cognition (i.e. self reflective awareness, subjective experience, or sentience) arises in the mind to begin with. There's no scientific model for where consciousness comes from in the first place, much less how to reproduce it.
>
> So that's a problem you have to address before proceeding here. And you're simply making the assumption that "oh well if we just keep adding compute, it will happen." That's an article of faith, sir. It's an occult belief at this point. On this point, you're on the same level as a witch doctor. A mystic. A shaman.
>
> Another problem is thermodynamics itself. The idea that there's enough energy in the universe--or at least enough energy on our planet, or alternatively, that our planet is capable of absorbing that much energy output without literally overheating--to simulate a billion universes is a bit questionable. We don't even know if you can build a computer powerful enough to simulate one universe, much less a billion. And certainly, there's no such thing as a computer that can even approach it now. You'd need like a quantum computer or something to even dream about that being a real possibility.
>
> And then another thing... this isn't testable at all. If we were living in a simulation, there would be no way to prove it. So this is a pure scifi belief. In which case, why are you indulging it as a serious idea? It has no practical bearing on your existence. It's fairy tale thinking. You're making up a story with religious overtones. Puts you on the same level as L Ron Hubbard or Philip K Dick, since it's a fantasy.
>
> It's interesting to talk about AI but these discussions inevitably go down the rabbithole of Silicon Valley Mysticism, which we generally refer to as futurism. And all that is questionable to say the least.
Platform: youtube · Topic: AI Governance · Timestamp: 2025-09-08T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgyZMSTSwiVlaIi_TCp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgynUnUd6bMRiqsJL-14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},{"id":"ytc_UgxxX0Nnd5jZsKT7bk54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},{"id":"ytc_UgzucEc2uEoXs-EGNIp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_Ugyzwk7E2ev5VLv-urh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},{"id":"ytc_UgxVhJhbWUINJpAAix54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgydzgGhUd1NpjKg5rR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_Ugxn76bxUFHvsFd5UM14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},{"id":"ytc_Ugx2cj-UrKkXcZ2T2jt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},{"id":"ytc_UgzIRPg6ds4IcRlYisR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]