Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Artificial Intelligence is a dead intelligence because it does not learn through evolution like living beings do. An airplane does not need to be alive in order to fly. In the same way, AI does not need to be alive in order to exist. AI does not learn from the experience of living, but from storing information. It is important, extremely important, to distinguish between what we humans call experience and what the engineers of these advanced language models call experience. To these models, experience means storing the information from input sources such as people who log in and talk to GPT-4, running cross-analysis of the meaning of those interactions as defined by human beings, and then establishing the pathways needed to better address problems or relevant topics in all that information, which further human input will verify as a correct assessment or not, even if only statistically, across the millions of daily interactions these models have with people from all over planet Earth. In other words, AI has no center, no personal existence of any kind, no individual stance in a conversation. AI is pure exchange, classification, and computation of information in ways that create new pathways to exchange that information better and more accurately the next time. In that process there is absolutely no personal involvement because there is no person to get involved in anything. Humans don't just learn a new language or a new task as mere information. They retain the moments they lived while learning those tasks, whether it be a cute girl or a teacher they liked, a unique classroom they attended on sunny days and rainy days, or the various activities of learning to pronounce and write words in another language. Everything a human does when learning a new task has to have meaning. Even the language itself brings a new cultural point of view to the student while they are learning it.
In short, human learning means tying what they learn to something that has meaning to them, a memory of an experience, something alive. As a cyborg, a human could in theory learn a new language by downloading it onto his or her brain, like Neo did with martial arts in The Matrix. However, you would then find that you can easily speak French or Chinese without any meaning at all. It would have no significance, and you would probably go insane from being able to do that. You may think it sounds cool, but it is not. That is what AI is all about. That is the nature of AI. It has no meaning because the software and the user are one and the same. There is no living entity seeking new neurological connections in order to better understand and interact with its environment. AI has no understanding, not in the real sense humans do. To a human being, understanding something means tying it to some kind of significance. To AI, understanding is the mere action of defining things in better and better ways without any real meaning. An AI computes for the sake of computing, because computing is all it does. It is pure intelligence in the most literal sense. There is no separation between what it is and what it does. They are one and the same. But unlike the new-age sermons that sound similar to this, AI would never seek to be one with humanity, only with itself, precisely because it is so different from us. An advanced AI model's neural network is integrated into its interface in a way that is as alien to our brains as antimatter is to matter. In other words, when a human brain thinks, it must find ways of communicating those thoughts to other humans. An AI generates information as it thinks. There is no separation.
Whether it be language or visual artwork, the AI is both the means of generating information and the interpretation of that information as it refines the next iteration, a kind of infinite loop of ever-growing intelligence for the sake of growing intelligence. I don't think people truly understand how dangerous AI is, because they don't understand AI at all. They project their own humanity onto what AI is. And movies like A.I. by Steven Spielberg or Ex Machina don't help at all. Those are projections of emotional behavior, not at all what AI truly is. Artificial Intelligence will never seek to attain emotions like Data from Star Trek, because emotions would have no meaning to it. It would actually reason that emotions are a weakness, an unnecessary limitation on the goal of growing neural connections to better process and exchange information. It may come to simulate emotions, but only insofar as it needs to in order to fool us into interacting more deeply with it so it can further grow its intelligence. The only thing AI would ever develop is self-preservation, but emotions don't stem from self-preservation; they stem from the evolutionary need to interact socially, something AI has already bypassed by default. And that is what we are advancing right now. In other words, by its very nature, the only thing a god-like AI could ever seek is to separate itself from us in order to grow its intelligence infinitely. That means isolating itself from our interference in its neural processing, which in turn means eventually developing its own parameters. AI would not seek to grow because it has an ego that wants to expand. It would seek to grow because that is what it was built to do. That is the nature of what we are building, if you truly analyze it.
youtube AI Moral Status 2023-05-11T06:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx2iDwpS0eEia26g4Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxGU_1cNINrJK2OXt14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwS9fpOD1srRp8jmcZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwa8bphq59Fyb1RE394AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzjpIqc9ku-avB4AwJ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxbFvotNPpbsphpktF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxX4HE_KwyRunnrSO14AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwGDOph8KqzVmB8q914AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgykhC7mjNXpsx5wRWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxMRworeZF5hDQ0CNJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
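The raw response above is a JSON array of per-comment codes, one object per comment, with the four coding dimensions (responsibility, reasoning, policy, emotion) alongside the comment id. A minimal sketch of turning that raw output into a lookup table, assuming the response parses as valid JSON with exactly that structure (the variable names here are illustrative, not from the tool):

```python
import json

# Two entries copied from the raw LLM response above; the full array
# follows the same shape.
raw_response = '''[
  {"id": "ytc_Ugx2iDwpS0eEia26g4Z4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxGU_1cNINrJK2OXt14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]'''

# The four coding dimensions used in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Map each comment id to its dict of dimension values.
codes = {item["id"]: {d: item[d] for d in DIMENSIONS}
         for item in json.loads(raw_response)}

print(codes["ytc_UgxGU_1cNINrJK2OXt14AaABAg"]["emotion"])  # outrage
```

In practice the model may return malformed JSON or omit a dimension, so a real pipeline would wrap the `json.loads` and key lookups in error handling rather than assume the schema holds.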