Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
A Few Arguments Against
1. The Timeline for ASI
Artificial Superintelligence (ASI) will take far longer to arrive—if it ever does. Thinking is possible, but consciousness is not on the horizon. Current AI systems do not truly invent; at best, they recombine existing knowledge. The simplest way to describe a Large Language Model (LLM) is as a statistical engine that predicts the most probable sequence of words.
2. Progress Limitations
Global conditions are unfavorable. Deglobalization, population decline, and monetary contraction are already limiting industrial capacity. This directly impacts the production of chips essential for AI training. In some regions, companies are already told to wait up to six months to scale cloud processing capacity. Growth in compute is hitting hard physical and economic ceilings.
3. Overconfidence in Humanoid Robots
The human brain is remarkably energy efficient, running complex cognition on just 12–20 watts. By contrast, today’s AI models can draw megawatts. For humanoid robots, this means reliance on a direct link to central AI systems, which restricts mobility and applications. A humanoid’s battery would last only a few hours, while a human can work for weeks on minimal input. Our biology is far superior in energy efficiency, storage, and adaptability. No matter how advanced humanoids become, humans will remain the cheaper and more effective option for complex physical labor.
4. Simulation Theory as a Hoax
True large-scale simulation is not feasible. Even with today’s immense computing power, we cannot predict financial crises or extend accurate weather forecasts beyond a week. The challenge lies in the details—tiny deviations compound over time, producing wildly inaccurate results. A true simulation would require tracking every molecule and particle, an impossible feat given finite storage and processing limits.
Quantum mechanics deepens the problem: a photon behaves as a particle only when observed (see the double-slit experiment). In other words, even our actual universe “cheats” to compress data. This makes the simulation hypothesis more of a belief system than a scientific argument—a way for some to cope with mortality, rather than a serious model of reality.
youtube · AI Governance · 2025-09-11T02:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx06jWp559Kas_jJ8R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwwN49z03zSmddoNgB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwfIHnlDn7VkKpdWzF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugym5sY6KBCgB8oR0t94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2ako-f8a-00Svg314AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzuixos3evi4dpEXSx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyGKgqESre5yjqzX8Z4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgxVmZQsn-l4LLCaj0x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwiLB95X2GSHB4TfS54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugycp3NTGYMZpXNQ8up4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
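The raw response above is a JSON array with one record per coded comment, using the fields `id`, `responsibility`, `reasoning`, `policy`, and `emotion`. A minimal sketch of how such output could be parsed into a lookup table keyed by comment ID — the helper name `parse_codings` and the shortened sample ID are illustrative, not part of the tool:

```python
import json

# Field names come from the raw LLM response shown above; this only
# validates structure, not the full label vocabulary, which a single
# sample may not exhaust.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding_dict}."""
    records = json.loads(raw)
    codings = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        # Keep the four coding dimensions, keyed by the comment ID.
        codings[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS - {"id"}}
    return codings


# Illustrative sample record (shortened, made-up ID):
raw = '''[
  {"id": "ytc_A", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]'''
result = parse_codings(raw)
print(result["ytc_A"]["emotion"])  # indifference
```

Raising on missing keys (rather than filling defaults) makes malformed model output fail loudly, which is usually what you want before coded values enter an analysis.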