Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment directly by its ID, or pick one of the random samples below; a minimal lookup sketch follows.
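The view resolves an ID to its stored coding record. Here is a minimal sketch of that lookup, assuming the coded records live in a local JSONL file (the path and the helper name are hypothetical; only the `id` field is taken from the data shown below):

```python
import json

# Minimal lookup sketch. The JSONL store, its path, and the helper name are
# assumptions for illustration; only the "id" field comes from the data below.
def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl"):
    """Return the coded record for a comment ID, or None if absent."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example, using an ID from the Raw LLM Response shown below:
print(lookup_comment("ytc_UgyDZBjjOGrjPPBzwwh4AaABAg"))
```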
Random samples — click to inspect
- "@ human are mammals but not all mammals are humans. Well humans aren’t god anywa…" (ytr_UgwMcPBNk…)
- "AI will become like cocaine in the future, where people especially those emotion…" (ytc_UgwJEw1lL…)
- "Andrew Anglin says reality will collapse in about 18 months with generative vide…" (ytc_Ugy4bV9fr…)
- "I don't agree with Adam dodge entirely with his methods for combating deepfake p…" (ytc_UgyfuvKVQ…)
- "Andrew was great on AI/Basic Income/ and a couple of other topics, but he was no…" (ytr_Ugw7ELgcR…)
- "I don't think it was ChatGPT programmers' fault, but more of the internet's ideo…" (ytc_UgxhcdqyB…)
- "Thank you for making this. I had a similar conflict in my mind whenever I heard …" (ytc_UgzxM67sk…)
- "All bullshit. When you can make and AI robot (or what whatever you want to cal…" (ytc_UgyMknwdf…)
Comment
Simulation theory people are in a cult. At the core of everything they believe, there are a few base assumptions that they are always wrong about and assume they are right. From there, everything they believe is flawed. He needs to apply his understanding of the incomprehensibility of AI outcomes, to Simulation Theory. They're similar problems. Humans have no ability to comprehend the factors involved. All ideas about exact outcomes are useless. The truth will most likely be something we have no ability to even know how to think about now.
Just like moving mass at light speed is understood to be in violation of the foundational laws of the universe, limitations could and probably do exist to prohibit such complete ability to simulate reality at the level we experience it. At the level to be seamless with conscientiousness. They never consider this and it is clearly probable. They also never discuss the fundamental elusiveness of conciseness itself. Inability to unlock these two barriers would prohibit simulation theory.
youtube · AI Governance · 2025-09-04T15:5…
Coding Result
| Field | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
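The first four fields (responsibility, reasoning, policy, emotion) form a fixed coding schema. A minimal sketch of that schema as Python enums, with member values taken from the raw response below; the full codebook may define more values, so treat this set as an assumption rather than the authoritative label space:

```python
from enum import Enum

# Coding schema sketched from the values observed in the Raw LLM Response
# below. The real codebook may be larger; this is not the authoritative set.
class Responsibility(str, Enum):
    NONE = "none"
    USER = "user"
    AI_ITSELF = "ai_itself"
    COMPANY = "company"
    GOVERNMENT = "government"

class Reasoning(str, Enum):
    MIXED = "mixed"
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    UNCLEAR = "unclear"

class Policy(str, Enum):
    NONE = "none"
    BAN = "ban"
    REGULATE = "regulate"

class Emotion(str, Enum):
    INDIFFERENCE = "indifference"
    FEAR = "fear"
    APPROVAL = "approval"
    OUTRAGE = "outrage"
    MIXED = "mixed"
```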
Raw LLM Response
```json
[
{"id":"ytc_UgyDZBjjOGrjPPBzwwh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRIxmguU9NFpjItvB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzmFHNTTqfo2OFypyN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgycT6zLnNJPWsuwUfV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwdM2nzenhdh-D2chx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxu80zE683qlsT5jCR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzL7mY8V5-AQkio50B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxDTGSTtI0jGUXcJvN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwce-TrI0FhbopQRaR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgytYvej7lgHUlmmcYt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
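Before labels are stored, a batch response like the one above can be parsed and screened for malformed records. A minimal sketch, assuming the response arrives as a plain JSON string and reusing the value sets from the schema sketch after the Coding Result table:

```python
import json

# Allowed values per dimension, matching the schema enums sketched above.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "company", "government"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records missing an ID
        if all(rec.get(dim) in ok for dim, ok in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping rather than repairing out-of-vocabulary records keeps the stored labels clean; rejected IDs can simply be re-queued for another coding pass.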