Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
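Under the hood the lookup only has to find the batch record whose `id` matches the comment. A minimal sketch of that search, assuming the raw batch outputs are stored as JSON arrays on disk; the `raw_llm_responses/` directory and file layout are illustrative, not the tool's actual storage:

```python
import json
from pathlib import Path

RAW_DIR = Path("raw_llm_responses")  # assumed location of the saved batch outputs


def lookup_raw_record(comment_id: str) -> dict | None:
    """Scan every saved batch response and return the record coded for one comment ID."""
    for batch_file in sorted(RAW_DIR.glob("*.json")):
        records = json.loads(batch_file.read_text(encoding="utf-8"))
        for record in records:
            if record.get("id") == comment_id:
                return record
    return None


# Example: the ID of the second record in the batch shown further down this page.
print(lookup_raw_record("ytc_UgyW0DjLJgN75M4PlQ14AaABAg"))
```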
Random samples — click to inspect

- "Waymo takes the lowest cost part of the operation out of the Uber...Uber drivers…" (ytc_UgzFTcSWb…)
- "Imagine AI in the USA it will create a lot of millionaires and enriching a preda…" (ytc_UgwKGgmhT…)
- "If you want reference images I will put before 2019 in the search and often that…" (ytc_UgwD6fOoj…)
- "The Chinese question seems odd to me. That 话 character is redundant. So I conclu…" (ytc_UgyZ2VlPt…)
- "I am a Bills fan. There is no way they will replace NFL players with AI generate…" (ytc_Ugx51NbI_…)
- "> but I do understand why many 3rd world countries refuse to take side. The …" (rdc_hhl30uj)
- "You really think you can predict the future 2 years out in these times? So if a …" (ytc_UgyBLSvEL…)
- "AI will replace management faster than it replaces the worker bee. Middle and f…" (ytc_UgwkGAZwr…)
Comment
I think there is a fundamental technological limitation with AI that any true AGI would very, very quickly run into. A human brain consists of neurons. Neurons are cells that can have dozens, or even hundreds, of inputs and outputs- connections, in other words. A single cell; one singular node. All thinking and neural functions consist of either a fixed, limited, and more or less automated response to a stimulus (either external or internal); or an endlessly repeating pattern- an infinite feedback loop. That pattern, repeating forever, represents *something*- it could be a memory, it could be a thought, it could be consciousness, it could be damned near anything (usually it's a memory in the process of being actively recalled). These patterns, in theory, can be hundreds, thousands, tens of thousands of neurons in length or more- theoretically capped only by how often any given neuron in the pattern is capable of firing without being changed or interrupted *too much*. Small changes or interruptions are acceptable; and in fact responsible for the "internal" stimulus that can trigger other neural activity, and the entirety of our dynamic consciousness and ability to be metacognitive- that is, to change our thinking by thinking about thinking. In any case, that means a SINGLE NODE, one neuron, could be a component in some, most, or even ALL of your brain's functions. These patterns are, ultimately, ephemeral- they rely on the brain being "on" at all times and actively firing these patterns just to keep them going (although in the case of memories, they're sort of "compressed" and de-activated, with the connections involved in the pattern being locked down or re-used but otherwise ultimately maintained in case the memory pattern is reactivated).
Here's the problem- to simulate this, AI learning algorithms simulate neurons. But the amount of math and code it takes to simulate even a single neuron is astronomical relative to the functions of that neuron. It takes far, far, far more than a single "bit" of data because the bits- which are ultimately just microscopic silicon crystals in a specific state- fundamentally cannot have neuron-like connections. They are precise, but not dynamic or versatile. It takes far, far, far more silicon to simulate a neuron than it takes organic matter to create and sustain an actual living neuron. Any digital "brain" on the level of a human brain- even with all functions related to physical activity removed, even with all the random vestigial patterns leftover from wild evolution removed- will still be bigger and more complex and *costly* in terms of energy than a human brain. These AIs only seem to mimic human intelligence because they consist of a large network of hundreds, thousands even, of computers all working in tandem. But this is unsustainable for any AI attempting to achieve something like "super" intelligence. There aren't enough computers on the planet for it to be anything more than a particularly smart human with access to a *ton* of calculators- there isn't enough silicon on the planet for it to scale exponentially. There isn't enough fuel available, even if all of humanity were to die, to power such an intelligence. And that's assuming it even gets smart enough to attempt anything relevant.
At the end of the day, even if you gave an AI full and total control over all of the processing power on the planet- while it might be the smartest person to have ever existed- it would still be what we could consider a "person" and would be severely restricted in how much further it could possibly go. Ultimately, its "ideal" solution to super intelligence might just be to specialize in genetics and find a way to create an organic body for itself with a genetically engineered super-brain with all the vestigial garbage removed.
youtube · AI Governance · 2024-09-12T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
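The Coding Result table is simply the matching record pulled out of the raw batch below and relabeled. A minimal sketch of that step, assuming the response parses as a JSON array of objects keyed by `id`; the record literal in the example is copied from the batch shown underneath, everything else is illustrative:

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def coding_result(raw_response: str, comment_id: str) -> dict[str, str]:
    """Pull one comment's codes out of a raw batch response."""
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            # Fall back to "unclear" if the model omitted a dimension
            # (a defensive assumption, not necessarily the tool's rule).
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(f"{comment_id} not found in this batch")


raw = '[{"id":"ytc_UgyW0DjLJgN75M4PlQ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
for dim, value in coding_result(raw, "ytc_UgyW0DjLJgN75M4PlQ14AaABAg").items():
    print(f"| {dim.capitalize()} | {value} |")
```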
Raw LLM Response
[
{"id":"ytc_UgyGpHx5T0Qj3U-yiCR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyW0DjLJgN75M4PlQ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzETP5ANgiJPKKdGu54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwAralPDmV2RPpyNNp4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy_psmlRzoS2n4qt8V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx7RsOIWpUrpm7HAvd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx3QptkNY8wL-yj0G94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzDbrpzEpTXMIcItDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzbTy8nKhPtnZzbxZ54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzqXgZnaf2L8P3Bf3V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
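Because the model returns a whole batch in one reply, each record is worth sanity-checking before it is stored and surfaced here. A rough validation sketch; the allowed value sets below are inferred only from the codes visible on this page and are almost certainly incomplete:

```python
# Value sets seen on this page; the real codebook may allow more.
ALLOWED = {
    "responsibility": {"company", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference", "mixed"},
}


def validate_batch(records: list[dict]) -> list[str]:
    """Return human-readable problems found in a parsed batch of coded comments."""
    problems: list[str] = []
    seen_ids: set[str] = set()
    for i, record in enumerate(records):
        cid = record.get("id")
        if not cid:
            problems.append(f"record {i}: missing id")
            continue
        if cid in seen_ids:
            problems.append(f"record {i}: duplicate id {cid}")
        seen_ids.add(cid)
        for dim, allowed in ALLOWED.items():
            if record.get(dim) not in allowed:
                problems.append(f"{cid}: unexpected {dim} value {record.get(dim)!r}")
    return problems
```

Run against the batch above, this returns an empty list; any value the model invents outside the expected vocabulary shows up as a problem string instead of silently landing in the dataset.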