Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

| Comment preview | Comment ID |
|---|---|
| Look how old he is. It won’t affect him. Him, Steve Jobs, Zuckerberg and the le… | ytc_UgwLbsFLE… |
| I don’t use vocal prompts with any of my machines. I resent deeply that they are… | ytc_UgwnC6CgE… |
| I see literally nothing wrong with wanting to copyright ai art it hurts nothing… | ytc_UgweGCxDX… |
| Naaa, if 90% are unemployed and have no money, the demand in almost everything w… | ytc_Ugy2DGSWI… |
| Ai won't stop someone from picking up a guitar or drumsticks or a saxophone and … | ytc_UgzugCKH7… |
| Sorry but it's a lose lose situation, look at the man arms almost snapped just f… | ytc_Ugxja9MFM… |
| I wonder if an AI ever thinks of becoming me or having my consciousness whether … | ytc_Ugy9lcaNG… |
| It is easier to blame immigrants for American problems than the wealthy corporat… | ytc_UgxQyeEyU… |
Comment
I think the simulation hypothesis is genuinely plausible. it’s an extremely efficient scientific method for an advanced intelligence. If a post-human AI wanted to understand life, evolution, culture, or failure modes at scale, running billions of complete universe simulations and observing which parameter sets produce thriving life would give far deeper empirical insight than any single experiment. Those runs would reveal not just whether life emerges, but the probability distributions of success and failure, and the causal chains that produce complexity. It reframes “why simulate?” as an information-gathering problem: simulate many worlds, compare outcomes, and learn the mechanics of life at scale. if simulations can be nested, how do you verify reality. Maybe whatever it is is asking that question of itself and trying to find out. Alternate dimensions indeed. those dimensions are just other programs being run with different variations.
youtube
AI Governance
2025-09-21T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzQxktB4DDPZ5UdTTl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzcN00KI7_S2BofkUF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwiT4fA3ieewY0zzBx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxl_4z-maGfJ2TeyXx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzZHi6K1m8qcK6p54l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwePZ3kv6azZef8MtV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzZ3I1kycfO2ifVssJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyDkMtP4Hg-j7dW_rR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz3tPGVVvf3FCWK9H94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz7NKROTWGjFuay35l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
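A raw response like the one above can be parsed and indexed by comment ID before it feeds the per-comment "Coding Result" view. The sketch below is a minimal, assumed implementation — the code sets for each dimension are inferred from the values visible in this sample, not from the actual codebook, and `validate_batch` is a hypothetical helper name:

```python
import json

# Assumed closed code sets per dimension (inferred from the sample
# output above; the real codebook may contain additional values).
CODES = {
    "responsibility": {"ai_itself", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "indifference", "resignation", "approval", "outrage"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid or not cid.startswith("ytc_"):
            continue  # skip rows without a recognizable comment ID
        # Keep a record only if every dimension holds an allowed code.
        if all(rec.get(dim) in allowed for dim, allowed in CODES.items()):
            coded[cid] = {dim: rec[dim] for dim in CODES}
    return coded

raw = ('[{"id":"ytc_UgzQxktB4DDPZ5UdTTl4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = validate_batch(raw)
print(coded["ytc_UgzQxktB4DDPZ5UdTTl4AaABAg"]["emotion"])  # prints: fear
```

Keying the dictionary on the comment ID is what makes the "Look up by comment ID" inspection above a constant-time fetch; any record the model emits with an out-of-vocabulary code is silently dropped rather than shown as a coding result.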