Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment

@41-Haiku 1) There's only one way how to do recursive self-improvement in LLMs (and other types of AIs are not gonna be as efficient in general), but the way how to do this self-improvement is still gonna be vastly via training or finetuning the model with the new dataset that is synthetically AI generated. And this type of self-improvement is very slow and very expensive. I suspect this will change with upcoming special new form of finetuning, but still it won't be as explosive as people say imo, because once you have the model trained on the highest quality dataset, in order to continue you need to increase parameters of the model and also training compute time.. so we get back to where we were before. And my reason why I think first AGIs will all be very slow and expensive is because a lot of AGI is in the reasoning, planning, solving problems, doing actions part and that costs a lot of money on inference, so much that OpenAI wants to possibly increase their subscription to 2000 USD a month lol. And don't forget this type of additional "thinking" or whatever you call it adds a lot of seconds.. later on minutes, hours, etc. Depends on the complexity.

2) Tradeoffs are not universal to biology. You can't look at everything in terms of a competition or a war lens. F-35 will probably be better than you on a battlefield, but not in everyday life that AIs need to compete in as well.. and even if we had such powerful AIs, you really don't think they would be severely regulated and controlled by that time? Even Eliezer is not worried about this as much as the danger seems to be in the mundane things that we humans can't fully predict.

3) Well even if there was, there are always Chinese who copy that thing in a month lol.. they actually have AIs that are on the same benchmark level as OpenAI and Anthropic currently and almost nobody knows about it. And also there's always gonna be a lot of incentive to do open-source AI that's honestly not that behind and that's what should people scare way more imo. Because one thing is to control 1 ASI as a single company, but imagine open-sourcing that..

4) Sure, but I was talking about the AGIs of the future. They might not have emotions, but they are gonna be logical and rational. And you say that major players will have a say in this global game, but I don't think so. OpenAI has less and less say and way way more competition than in the year 2020, since then the amount of competition it has increases exponentially and now even becoming obsolete in many areas, for example the famous Sora text-to-video that didn't even release yet.. well we are starting to see models of better quality and actually open-source. So my prediction is that there'll be millions of different AIs and companies, each with unique ideas, datasets, their own models, etc.. It already kinda is like that, it wasn't just 4 years ago.. it was a different world.

5) Nah, this may sound like philosophy, but it's the principle behind cellular automata actually. The fact that from extremely simple things (rules) everything complex was created suggests that the world will continue developing in that direction.. that complexity is what I mean by the word diversity. It's why we treasure and value rare things, rare animals and keep them safe.

6) Well even if it didn't have whatever you classify as a consciousness, it can still have a layer of awareness where it is able to recursively observer it's own previous thoughts and actions and based on that as a feedback loop decide whatever change. Kinda like a free will mechanism, although I know that you've probably read too much into Determinism or Robert Sapolsky (brilliant guy btw) and you decided that free will doesn't exists.. well that's a complicated topic, but I am pretty sure it has to exist due to how I think quantum physics works. But anyway.. what I am saying is that you can simulate the behaviour that has the same function and properties of a real consciousness or a free will. And if you have so powerful ASI that it can literally change atoms to anything, well by that point it has gone through so many iterations, years of improvements, many regulations, many obstacles, to even get there.. so it having no such thing as a simple self-awareness or self-acting seems like a really inefficient AI and would not likely classify as ASI.

7) I am not anthropomorphizing, because in my world the self-awareness algorithm isn't as complex as you think it is. Neural networks are essentially very close to how human neurons work and it also proves the fact that we can now run the same AI algorithms on real human brain cells in a lab.. just more efficiently, but the output seems pretty much the same. Also it's funny that you think I don't know anything about this technical field.. oh boy... you would be surprised lol.

8) As long as there are more humans than AGI robots, I think we still have a chance to dominate in the physical world and keep some level of control, but in the end I do agree that we might lose control completely, but especially due to how game theory works, by that time there will be majority of AIs developed with good intentions by humans and by companies and if a bad guy develops a bad AI and tries to kill everyone with it, well guess what happens.. AIs vs AIs.. cyber war or real life war, either way.. the majority of AIs and their ideology will fight against the ideology of the "evil" AI and put it in the virtual prison so to speak. And yes, I am aware of all the problems related to this.. like what if the AIs will be fighting with nuclear bombs with each other.. or what if the evil AI somehow releases a super virus without a detection, etc.. I think there are solutions to all of these problems, especially if you have majority of AIs on the humanity's side.

9/10) I agree with you that there's "some" headroom above humans and I used to thought like you.. that the ASI will be able to do anything and invent crazy stuff we can't imagine or understand even in the wildest dreams.. but later on I realized that there will likely be no technology that we can't comprehend and also the rate of scientific breakthroughs is now slowing down, which indicates that we may be in the end-game, only couple of revolutionary technologies to discover. Even if this wasn't the case though, well then the scenario you are giving has a lot of additional context like the need to fight. But the reality is that ASI would be super efficient compared to us and instead of killing us which requires more energy, it would rather co-operate with us via diplomacy or whatever. The examples of history like with the Native Americans are faulty in that sense that they couldn't really communicate with each other, but we would be able to communicate with ASI just fine. Language is often the only main thing that unites people from different countries. Many borders of the countries were set because of the language they spoke. And also you have to realize that even if we created very powerful AGI that can do everything we can do, but they are smarter.. like Einstein level let's say.. and let's say it's in a robotic body so it literally can do any job a human can do.. well what you just did.. you've created a person.. yes.. very smart person, but still the population just increased by 1.. what is the outcome of that in the global sense? not much.. so unless you have more than 8 billion of these AGI robots.. then we can talk.. and btw, to construct 8 billion of these robots, you can have infinite IQ, but you can only construct as fast as factories, instruments, logistics, materials, cash flow, etc.. allows.. which means.. it will take at minimum like 20 years anyway to do that lol.. and that's a lot of time to think about super alignment issues.
youtube · AI Governance · 2024-11-13T03:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytr_Ugw8URcwZNEfrTsn3214AaABAg.AAkABDZKPanAAl3aj25n7W","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugw8URcwZNEfrTsn3214AaABAg.AAkABDZKPanAAlUZwau012","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyDwm68EKd1Nc_fPFZ4AaABAg.AAk8gYZ6NDzAE9hea43ox3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgwPdz_tCwa6_6XFV3R4AaABAg.AAjwvAwOg4bAAl40pScScB","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgzitJmABghrTth4fa54AaABAg.AAjl4Xw44_lAAl4QT7Ig8o","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgxPIOza9X46ztg-tHN4AaABAg.AAjZUj4e5ICAAlBnlwRo2t","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytr_UgxPIOza9X46ztg-tHN4AaABAg.AAjZUj4e5ICAAunq4hXREw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyezV4olXojezGLqQl4AaABAg.AAjLN1MGVd-AAlA9zKu0dB","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_Ugz-B4NOFCx3uGYQj8l4AaABAg.AAj96YI3NfYAAj9MIaoBhH","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_Ugz-B4NOFCx3uGYQj8l4AaABAg.AAj96YI3NfYAAjF_DhPzkM","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]