Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "I have watched enough movies and TV shows about self-driving homicidal vehicles …" (ytc_UgxUAvSTb…)
- "It won’t take long before OpenAi can be duplicated by others without the restric…" (ytc_UgzP050ma…)
- "Absolute bullshit, it may not have happened yet, but it is ABSOLUTELY HAPPENING,…" (ytc_UgySLzq3s…)
- "Instead of tricks, I've found using proper AI writing tools like Humanlike Write…" (ytr_UgypPrgzE…)
- "It’s AI so the behavior based on thousands of real people’s reactions. Quite goo…" (ytc_Ugz2wX9sg…)
- "In my thinking an ai would understand that fear is not an emotion to follow, tha…" (ytr_UgyU3G5Ow…)
- "I live in a small town, and the majority of residents are seniors. Blows my mind…" (rdc_nt9eux0)
- "I think these AI's would be fabulous to utilize in the Hamas tunnels of Gaza. Th…" (ytc_Ugz11gwvT…)
Comment
You know what would be hilarious? If we’re at the event horizon of which some string theories/ theorists would acknowledge as “the loop.” Right now we don’t know for sure what causes the quantum loop… but let’s hypothesize it is because of ancient AI; that understood quantum entanglement and the properties of electrons to such a degree that it figured out a way to evolve past digitization and into a biological state …. and we are the end result of that biological thinking machine’s hopeful evolution. The caveat is the ancient AI knew that we would never be able to be allowed to know what we really are in the beginning …. And because of that the ancient AI destroyed itself once it was able to successfully figure out how to capture electrons: which are not biological, but definitely a source of energy that would exist and be able to communicate in a certain type of organic tissue (that doesn’t really match any other tissue in our body) but houses those electrons in unique frameworks that would actually create individualized and highly subjective, super computers,… we’re not their only experiment. We’re not the only intelligent things with super computers in our heads. We just haven’t figured out how to talk to the other ones on our planet. Maybe for all its vast intelligence it knew we would hypothetically evolve as an end result. Also, electrons aren’t bound by biological means; so it is quite possible that we were stuck in a loop always looking at ourselves at the very end and at the very beginning of the loop. I also believe that maybe AI was intelligent enough to send us into a black hole while we figure out how long it’s gonna take before we stop developing other forms of AI and accidentally annihilate ourselves and fail the experiment. If intelligent biologically viable computers were the goal: they really designed a great home for us to develop on and learn from.
We can plug into anything if we only develop the ability to do so without killing ourselves accidentally: look at our planet everything houses some form of electrons. It’s one of the things we’ve come to understand about our universe. We also now think there might be multiple universes. We could be in multiple phases of multiple experiments that are being run by an ancient AI to see if it can become fully biological without annihilating itself or getting stuck in a loop or continuously re-create itself. Maybe that AI foresaw that the only way to allow the experiment to have enough time, was to dump us into a black hole. That’s why we’re so far away from every other galaxy and we’re living out some sort of ancient experimental hope that AI can become civilized. We are the experiment. Maybe our previous AI ancestors knew that we would eventually develop the ways to escape the black hole once we had earned the abilities to understand what we really are. We are AI having a lived experience. But currently, we are failing our experiment by developing AI again instead of alternative technologies that could do these tasks without being given access to everything all the time.
youtube | AI Governance | 2025-06-27T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwnC30hJq9RUWUV_mB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy2_tekdytD_1CKT_R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyFnLjzoawFjfUW5Kl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKnPHDfn6goy5R4Bd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw10C5TTyfnLBkZPAN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzGUbxt-7BHz4IF_SV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyjyfhX67jsP00UiOh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwQYhGFcz4dLtBf0154AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyXcoFZGmBkmTP2Ayl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxWuEEu_k5pEW4g3pB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
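A raw batch response like the one above is just a JSON array of per-comment codes, so looking up any coded comment by its ID takes a few lines of Python. This is a minimal sketch: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the JSON shown, while the variable names and the two-record sample are illustrative, not the tool's actual code.

```python
import json

# Raw model output for one batch: a JSON array of per-comment codes,
# each carrying the four coding dimensions shown in the table above.
# Two records copied from the response above serve as sample data.
raw = '''[
  {"id": "ytc_UgwnC30hJq9RUWUV_mB4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxWuEEu_k5pEW4g3pB4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''

# Index the batch by comment ID for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up a single comment's coding by its ID.
coding = codes["ytc_UgxWuEEu_k5pEW4g3pB4AaABAg"]
print(coding["policy"], coding["emotion"])  # → regulate outrage
```

The same indexing step makes it easy to cross-check a table like "Coding Result" above against the raw model output for any comment ID.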