Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
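As an illustration, here is a minimal lookup sketch, assuming the raw responses are stored one JSON object per line in a JSONL file; the file name and helper function are hypothetical, not part of the tool:

```python
import json
from pathlib import Path

def lookup_raw_response(comment_id: str,
                        path: str = "raw_llm_responses.jsonl") -> dict | None:
    """Return the raw coding record for one comment ID, or None if absent.

    Assumes each line holds a flat JSON object with an "id" field,
    matching the batch format shown under "Raw LLM Response" below.
    """
    with Path(path).open(encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Hypothetical usage, with an ID from the samples below:
# lookup_raw_response("rdc_eh43wo7")
```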
Random samples (click to inspect):

- "You can understood the true power of division even back in the Myanmar genocide …" (ytc_UgxIpXMZV…)
- "@j.l.2849 _If I have to monitor the driving, what is the point?_ You do underst…" (ytr_UgwWZnT0Q…)
- "On God we are finished the first picture is made by AI done for 💀…" (ytc_UgzqdGM2O…)
- "Yeah, the conclusion is only based on FAO's compilation of research whose scope …" (rdc_eh43wo7)
- "Canceled mine as soon as I saw they'd taken the DoD's offer. And I'm (I'as) a Op…" (rdc_o85fo3h)
- "I appreciate you sticking up for us artists. We're being yelled over by people w…" (ytc_UgyMGHZTW…)
- "The really scary AI's aren't the ones that humans put in charge of human made we…" (ytc_Ugxvkr18E…)
- "There are two reasons why Big Cos. attribute layoffs to AI - 1. Because the mar…" (ytc_Ugyu29oJA…)
Comment
I don't think we even can achieve human-level intelligence, let alone superintelligence, with our current neuron models and neural processing hardware. Our electronics are far too inefficient; our brains run on literal peanuts and orange juice, and you would probably need many trillions of parameters to encode a system of the same level of complexity as the human brain. This planet hasn't got the resources to build enough computers for that, not with our current types of computer architecture. I don't think it's entirely impossible to build artificial intelligence, since through evolution, natural intelligence emerged from primitive multicellular life within a few hundred million years, but I don't think the human Industrial Age will achieve that goal before collapsing when it runs out of resources, which will most likely begin at some point between next Thursday and 2045, if it hasn't begun already and we just don't see it yet.
We are so deep in ecological overshoot that I don't think the Industrial Age will leave very much technology behind for the people after the collapse, if there are any; we have triggered a global mass extinction event, and depending on how fast we stop doing additional damage, our chances of survival as a species can be anything from pretty good for a large mammal, since we're intelligent and omnivorous, to not a chance in hell, because there aren't any ecological niches left for any large mammals.
The Industrial Age is toast. Later ages will have far fewer humans leading much simpler lives; if we want the future to have complex technology, we had better find a way to build tiny factories that can make all the necessary components from local materials very fast, before we run out of steam. Whether we experience a sudden drop off a cliff or a decline drawn out over a couple of centuries depends on how well things still work when supply chains vanish overnight and never return.
I'm much more worried about all the ways even a rather dumb AI can cause immense damage. Even if it is only as intelligent as an insect, it can still replicate very fast and spread all over the Internet using 0day exploits, and you wouldn't want to get eaten by ants. However, I think the current AI bubble will pop very soon, and then there won't be much funding for AI research for the foreseeable future. OTOH, GPUs will be dirt cheap for a while after the bubble bursts, unless cryptomining picks up the slack.
Instead of building larger and larger models that need huge computing centres, we should be building smaller specialist models that only do one thing as efficiently as possible, with as few parameters as possible, so they can run on a small SoC that runs on solar power. That kind of thing might stay useful for a longer time during the decline of our age.
Platform: youtube · Video: AI Moral Status · Posted: 2025-11-02T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
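For anyone working with these codings programmatically, here is a minimal sketch of the record as a typed structure. The value sets are inferred only from the labels visible on this page and are an assumption, not the full controlled vocabulary of the codebook:

```python
from dataclasses import dataclass

# Value sets observed in the table and raw response on this page;
# the real codebook may define additional categories.
RESPONSIBILITY = {"none", "developer", "company", "ai_itself"}
REASONING = {"unclear", "consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"none", "unclear", "regulate", "liability", "industry_self"}
EMOTION = {"indifference", "fear", "outrage", "resignation", "mixed"}

@dataclass
class Coding:
    """One coded comment, matching the four dimensions shown above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension falls outside the observed value sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"responsibility: {self.responsibility!r}")
        if self.reasoning not in REASONING:
            raise ValueError(f"reasoning: {self.reasoning!r}")
        if self.policy not in POLICY:
            raise ValueError(f"policy: {self.policy!r}")
        if self.emotion not in EMOTION:
            raise ValueError(f"emotion: {self.emotion!r}")
```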
Raw LLM Response
```json
[
  {"id":"ytc_UgxXCekE55VQVbQH3PF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwJm-OArfDd2ic_rUh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxfGNyBMRhG35Km0FZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWTw9cy4y99vHLxzF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwzLRhrBVS8d5Xv9lp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy75YFkxYonvACDxE94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz5Bh0OcTquVF5t7kV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzEv56ZF-KCWtVU1X14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugwrr26-wXAWPG86jwZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwtqoa3mTNqyrNFF1Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
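A hedged sketch of loading and shape-checking one of these batch responses; the parse_batch helper is hypothetical, and it assumes the model returns a JSON array of flat objects exactly as shown above:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw batch response and check every record's shape.

    Records with missing or unexpected keys are rejected outright,
    which surfaces malformed model output before it reaches analysis.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of codings")
    for record in records:
        if set(record) != REQUIRED_KEYS:
            raise ValueError(f"unexpected keys in record: {sorted(record)}")
    return records

# Abbreviated example; the full ten-record array appears above.
batch = parse_batch(
    '[{"id":"ytc_UgxXCekE55VQVbQH3PF4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]'
)
```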