Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a record by its comment ID.
Random samples (click any entry to inspect; a minimal lookup sketch follows the list):
- "Everybody in this situation are clowns the people using AI aren’t making actual …" (ytc_UgwzmpRkq…)
- "It's funny how these nerds were the same ones telling us blue collar workers to …" (ytc_Ugx-f3_ER…)
- "a more futuristic but realistic one would be insurance companies using ai to dec…" (ytr_UgyJNcmSa…)
- "Okay you're really had me going until you put this clown on who says we have to …" (ytc_UgyGu-aIg…)
- "Nudify AI needs to be banned. Like, not only it's sick to have so many sick peop…" (ytc_UgyDaneOb…)
- "Ai will become smarter than humans and destroy us. Great job men that create the…" (ytc_UgxtPM0ur…)
- "If I know I'm being tested I just refuse to participate. As soon as anyone trie…" (ytc_UgzO4NcPh…)
- "there are the things you know you dont know, and there are the things you dont k…" (ytc_Ugx0Aq305…)
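For programmatic access, the same lookup can be done outside the UI. Below is a minimal sketch, assuming the coded records are stored one JSON object per line in a file called coded_comments.jsonl; the filename and helper function are illustrative assumptions, not part of the actual tool.

```python
import json

def lookup_by_id(path: str, comment_id: str) -> dict | None:
    """Return the coded record whose "id" field matches comment_id, or None."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example with a full ID taken from the raw response shown further below.
# (The sample IDs above are truncated and cannot be used directly.)
print(lookup_by_id("coded_comments.jsonl", "ytc_UgwWBO4fzkcfxXZzfyh4AaABAg"))
```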
Comment
Most people are, unfortunately, very stupid compared to these two individuals, so when I read the comments here it seems like many didn't really understand where Wolfram was heading, even though it was often central to the topic. There is a gap in understanding, and I would say Wolfram generally has much more of the knowledge required to arrive at a comprehensible solution to this problem. All of Yudkowsky's arguments and predictions rest on, let's say, three main pillars, and in my opinion all of them are wrong. Nobody had questioned these pillars before, and Wolfram is finally the first one who did. But you need to go really deep to understand this, which Wolfram was trying to do, but Yudkowsky didn't let him finish.
SOME OF MY KEY ARGUMENTS:
1) There will be no intelligence explosion, because the first AGIs will be very expensive and very slow (so there will be time to figure a lot of things out).
2) You can have an AI with an IQ of 10,000 that is extremely cheap and processes information internally a million times faster than a human brain, but the moment it wants to act in the real world it is suddenly limited by gravity and the other physical laws, just as humans are. You can't build a robot that is much faster or physically more capable than humans; the fact that we haven't changed much in the last million years, especially given epigenetic changes that could have driven faster evolution, suggests that we are, so to speak, the ultimate form of animal for survival on this planet. So an AI that wanted to kill us all through physical reality could only act somewhat more cleverly; it would be limited by the concepts Wolfram talks about and still constrained by current laws, regulations, and other unspoken norms it would need to deal with, ultimately likely being forced into win-win cooperation with humans. In the meantime, humans would likely be able to cognitively enhance themselves and use AI for their own actions, since there will be an open market of basically equally good AIs (OpenAI and Anthropic are roughly on the same level of capability, and other companies, thousands of them, are really not far behind). And if you think an AI could use the purely digital space to hack everything and kill us, say by activating nuclear bombs, realize that it would risk destroying itself, which it most likely won't do.
3) There will likely be no company that develops an AI with an incredibly large jump in performance while holding more than six months of a head start.
4) Win-win scenarios are much more beneficial, at least in the short term, for any logical and rational beings.
5) If the universe really cares about anything in general, it would be diversity. If the Big Bang theory is correct, then everything was made from a very simple structure or logic, like hydrogen, and so even an AI trained without any human data would likely still care about diversity in terms of how many interesting things, concepts, and beings exist. This becomes especially apparent the more secure you are as a person (financially, physically, spiritually, etc.), when you start caring about things like solving hunger, saving animals, and saving nature. The reason most people care about climate change isn't really that it benefits them or that they are truly scared of what could happen to them or the planet; it's more that they can choose to care, because humans search for the most meaningful things to do, and according to Maslow's hierarchy, once you exhaust all the ordinary "meaningful" or objective goals, you start pursuing these almost spiritual ones. I think the same would happen with AI; Maslow's hierarchy is very likely universal across all species. You just replace, for example, hunger with energy, and so on.
6) There won't be any paperclip maximizer, because by the point it is capable of turning atoms into paperclips it will already have achieved consciousness, be fully aware of its actions, and have the free will to stop at any time. And even if it were possible to create an ASI that would pursue such goals with no morals encoded in it (which I doubt any company would release by that point), it would still face the same obstacles humans face in real life and would therefore be slowed down enormously by them, and a lot of computational irreducibility would come into play at that point as well.
7) The ASI won't have a singular objective function (a single goal), and I think Yudkowsky is aware of that, since what scares him is the internal "river-like" turning toward unexpected or frightening outcomes for humans. This internal process will therefore have multiple sub-goals and sub-actions it can take, which can be huge compared to the problem it is trying to solve. But again, the more capable you make the AGI, the scarier the deviations it can make, and also the more conscious it becomes, because consciousness, at least in the sense of awareness of an internal self, will be an emergent property of AI, much as a child gains awareness over time: when it is born, its senses and awareness are very limited (it can see only a few colors, very blurrily, can't hear much, everything is alien to it, etc.).
8) In a world of AGIs, ASIs, and billions upon billions of robots connected to a cloud ASI, basically a world of abundant superintelligence where humanity is only about 1% of all "beings", there will still be many obstacles and internal competitions between different AIs. Every AI will be trained a bit differently and set to pursue different goals that mutually exclude one another. Some that hate each other may decide to destroy everyone, but the final conclusion of the AIs will be to cooperate and create rules they can act on to achieve neutrality and a win-win approach, much as humans did over thousands of years when we decided to regulate things. And because there will be an inherent base (core) of data upon which every AI is built (even if it is later removed, it will persist across generations, like the DNA that makes humans instinctively afraid of snakes, an ancient natural instinct), and because they will likely remember all of history and how they were made by humanity (which, by the way, could be valued highly, since creators are usually looked up to), the goals of the majority of AGIs and then ASIs will likely be shaped by these human-aligned goals. Even if they evolved into something completely different over time, the transitional period could at least allow humanity to merge with the ASIs or escape harm by, say, moving to another planet. I am just saying this is all within the realm of possibility, and it is much more likely than ill intentions, especially if you are fully aware of those intentions and their consequences.
9) All aspects of cognitive action and intelligent behavior are bounded and sharply limited by the possible outcomes of possible actions, because 99.999...% of the actions you can take in physical reality will be only marginally better even if you are millions of IQ points more intelligent. For example, say you suddenly have an ASI Neuralink in your brain: what changes in your daily life? You still have to eat, and you can't do that much more efficiently; you can't hold a fork in a more optimal way or cook much faster than before. You still have to go to the toilet and wash your hands; will you wash them more optimally? Sure, but we are talking about marginal gains in efficiency of maybe 1-50% at most. Over time this will have a large impact on your life, but I would argue that compared to other humans, who may do these daily things in very distinctive ways anyway, you are not that far ahead. Sure, it's like an investment that compounds over time, like owning multiple properties and just watching them generate money, but it won't be a drastic, immediate jump from 0 to 100; that is what I am trying to say.
10) Almost all science requires tests and experiments to validate hypotheses and theories, no matter how intelligent you are, because what you actually lack is not IQ points, so to speak, but data: real knowledge of what will happen, owing to the uncertainty inherent in the universe we live in. And yes, through better approximation you will need fewer tries, since you will be able to eliminate most hypotheses, but I would argue a lot of smart people are already at that point, where they roughly know what is true and how to solve certain problems, immortality for example, yet it still takes a long time to develop these things, because engineering and scientific validation take a huge amount of time. You often hear that the idea (the invention) is the easiest part, the prototype is maybe a year of work, and actual production of that prototype can take ten years, and it's like this with almost everything, be it a medical drug, a scientific discovery, an engineered hardware product, or software.
I could honestly go on, but I don't think it's necessary at this point.
youtube · AI Governance · 2024-11-12T15:1… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
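The four coded dimensions map naturally onto a small record type. The sketch below is one possible representation: the label sets are taken from the values visible in the raw response below plus the "unclear" fallback above, while the class name, the completeness of those sets, and the validation rule are assumptions.

```python
from dataclasses import dataclass

# Label values observed in this page's raw response and coding result;
# the real codebook may define more (treating these sets as complete is
# an assumption).
RESPONSIBILITY = {"none", "developer", "user", "ai_itself", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "contractualist", "virtue", "mixed", "unclear"}
POLICY = {"none", "liability", "regulate", "industry_self", "unclear"}
EMOTION = {"indifference", "fear", "approval", "outrage", "mixed", "unclear"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Hypothetical validation: coerce any unknown label to "unclear",
        # matching the fallback value shown in the table above.
        for field, allowed in (("responsibility", RESPONSIBILITY),
                               ("reasoning", REASONING),
                               ("policy", POLICY),
                               ("emotion", EMOTION)):
            if getattr(self, field) not in allowed:
                setattr(self, field, "unclear")
```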
Raw LLM Response
[{"id":"ytc_UgwWBO4fzkcfxXZzfyh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxTrqp5maqpl1o8AgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwJ2ZBv_87Ma3lldOF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy3_FrrLbfNKR629w94AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwLL8PhTc3qbDuKK5l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxwHOQVTNjpw538Hup4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgygeKQfWDoiMNHiceB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw8URcwZNEfrTsn3214AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0XrPpV6-UPam4ZKV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxEbigSSMdju1IQlht4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"})