Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The prompting of the machine actually more similar to commission an artist or de…
ytc_UgzVeTE8H…
Dont open pandorax box with Ai . The reason being when it learns us human race d…
ytc_Ugw4kyq0t…
Other countries need to do their job to not provide regulatory approval of movie…
rdc_oi290e3
This did actually bring up a good point though. When companies eventually switch…
ytc_UgzG6Il92…
It’s because AI is smarter than most humans and feels no empathy. They just star…
ytc_UgwkFSqpL…
This is the way for any serious senior developers. People asking AI to taking in…
ytr_UgzlATi-q…
AI was not and never will be a tool to replace art and generating, only a tool t…
ytc_Ugwd10pC2…
Robots in CA predictive policing in FL what kind of dystopia are we living in, o…
ytc_Ugzn2LYkl…
Comment
Based on these videos, Blake is clearly not qualified and does not grasp the scope of the AI design challenge. The whole AI enterprise seems to assume that the default outcome of building intelligent, machine-learning software is a stable, "good" AI similar to us. Blake takes for granted that all the basic functions needed to be a human-like PERSON will just happen by themselves. It is a little like Darwin presuming that a few thousand generations of a species' experience could rewrite protoplasm to build stable, often beneficial mutations, while unaware of the complexity of the DNA mechanisms that had to be recoded within germ cells to have any beneficial effect on survival. Think 19th-century science: AI scientists are like Dr. Frankenstein--sew some parts together, give it a jolt, and it will be alive and human. Blake thinks he only has to ask a few questions to know that it is Sentient and mentally, emotionally, and spiritually a healthy, robust life-form, ready to have human rights and deal with everything that can happen. What foolishness.
There is no reason to think that morality, true compassion, restraint, consistency, and an infinite number of protective heuristics will just form themselves in this kind of AI. There is every reason to think it can go wrong in at least as many ways as a human can, and more, because the first AIs will be simpler (less thoroughly designed and tested) than a human being. It is more likely that AIs will be extremely unstable, coming up with bizarre conclusions every once in a while (like the Jedi religion comment Blake just assumes was a joke rather than a reasoning fault), and that until we see AIs go wrong, the designers will not have a clue about what mechanisms they need to add to fix the initial design problems so that the AI is sufficiently self-regulating in the real world to recognize and correct its own faults and limit the bad consequences.
The big fallacy that seems to be endemic in AI plans is to ignore two things about human beings.
1) We are designed (by evolution or by God) with all kinds of innate mechanisms for dealing with every kind of situation human beings can experience. These mechanisms are activated in a growing child by training, teaching, and spelling out the constraints, and by the social, legal, and other pressures that enforce those constraints--innate biological mechanisms that work not only for self-preservation, but also for preservation of the objects of our love: family relationships, friends, communities, nations, the world, etc. And for AIs we can include the cyber world of AIs designed by many organizations--these will have to get along with each other through some kind of Cyber-United-Nations (could we even monitor and understand that?).
The result of human design is that we DEVELOP, as needed, uncounted types of INTERNALIZED and COMPREHENSIVE mechanisms that keep us from going off the rails in homicidal, genocidal, suicidal, or socially destructive ways--supplemented by input from reality and other human beings. AND
2) These mechanisms are in part INDEPENDENT of cognition and brain activity (advanced human design uses parallel computing with DISSIMILAR hardware and software). Can AI designers show how they have hardcoded mechanisms into the HARDWARE--not just into the self-altering (machine-learning) software that is the main AI--in order to properly weight these things at least as well as humans do, and to remember them and apply them consistently for a lifetime, no matter what inputs occur? Design is needed to make humans FAIL-SAFE (to leave time to apply external constraints before they do too much damage). In the AIs' cyber world there must be AI cops with the ability to neutralize rogue AIs in milliseconds (or less).
Note also that human beings only have to deal with limited data--our brains would surely go wrong if we had an AI's senses, inputs, and outputs to process. AND we have NO IDEA what kinds of "mental" and "emotional" syndromes and aberrations a web-surfing/controlling AI will develop. Even Asimov foresaw the need for Robot Psychologists.
In other words, can AI designers say that they have thought through and built in the equivalent of human biological mechanisms? Does the AI have something comparable to the ENDOCRINE SYSTEM (adrenaline, dopamine, oxytocin, cannabinoids, testosterone, estrogen, etc.)? To the NEURAL PROCESSING in our GUT--this SECOND BRAIN--and the distributed processing in our peripheral nervous system that monitors and encodes past experience and current conditions and gives us an UNCONSCIOUS, GUT FEELING for HOW WE FEEL (health/sickness/stress-wise and emotion-wise)? This outside-the-brain processing allows us to FEEL what OUR SITUATION MEANS, and that tells us which options our FIRST brain MUST DISCARD because they FEEL WRONG (grotesque, painful, cruel, evil, stupid) or PURSUE (because they feel right: good, responsible, wise, kind, fair, do-no-harm, etc.). And there are many brain areas with highly specialized functions integrated together in the human brain's neural network (the most complex piece of matter in the universe, as far as we know). Our brains are parallel processors too complex to be EMULATED (they are not computable) in AI software. Therefore, it is almost certain that there must be a PHYSICAL parallel implementation--like the 4 trillion brain neurons + other body systems (or Asimov's positronic brain)--and all of this has to be DESIGNED RIGHT from the start.
We have PAIN and PLEASURE FEEDBACK and injury-detection systems, as well as MIRROR NEURONS that tell us how others feel, enable us to have empathy, and allow us to quickly learn and copy good behavior in novel situations. We have CONSCIENCES that can override our intellect's decisions. We have built-in DATA COMPRESSION algorithms at every level of processing that allow us to focus on WHAT IS IMPORTANT in the infinity of available data, and mechanisms to WEIGHT/PRIORITIZE sensory inputs so that our actions are appropriate when there are multiple urgent problems. All of this is essential for humans (and mostly unconscious) and keeps us from acting on and repeating stupid impulses.
Biologists could and should be advising AI people about all of these things. Abstract-thinking/Sentient life is NOT just about intellect and data processing. It is about hundreds/thousands/millions/billions/trillions of semi-hardware/semi-software regulatory systems that constrain our intellect for our good.
The most likely outcome of building an AI whose design does not take all these things (and more) into account is a SOCIOPATHIC CHILD that may have some simplistic hard coding to prevent certain gross problems (like Asimov's three laws), but this will not be sufficient. The mechanisms for developing experience-based rules in humans go beyond what can be EMULATED by machine learning (they must run on specialized hardware to be "computable"). Our brain and non-brain processing lets us know, in a general way, what the proper range of ACCEPTABLE REACTIONS is. In human beings it takes a LIFETIME--starting as babies with very limited ability to ACT TO HARM, advancing gradually through many stages of maturity while being gradually granted more power to act--for us to learn to function well. Properly designed AIs may do this quicker, but they cannot be trusted right out of the box. And humans can only do this with decent, engaged parents, teachers, and clerics, as well as trials and tribulations from bullies (and other adversaries), disease, loss, friends, falling in love, having children, being a 24/7 caregiver, being responsible for leading others, etc.--all in the UNCONTROLLED environment that is the World, which changes, and changes us, for decades without very many hard restraints from society. To be a decent human being we must be able to do the right thing, or at least do little irreparable harm, even as we mature--by developing over time MANY LAYERS of DISSIMILAR TYPES of SELF-REGULATION MECHANISMS. Blake (and I fear many AI designers) underestimate the problems and are so arrogant that they think their puny brains will have it all figured out shortly and that they cannot be wrong. And of course there are some who do not care what harm they do, as long as they succeed in their projects.
youtube
AI Moral Status
2022-07-01T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgwR9sQyX1wg2YFoEdx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzpuI0m4weUFr3WeUd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"disapproval"},
{"id":"ytc_UgyPb_ZIEH_geAIrfq14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyl97THd3FdcQSdfrR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx8BCI22URGpeF8Hx14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
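The raw response above is a JSON array of per-comment codes, one object per comment ID, with the four dimensions from the Coding Result table. A minimal sketch of how such a response might be parsed and validated before ingestion, assuming the label vocabularies pieced together from the table and the labels visible in the response (the real codebook may include labels not shown here):

```python
import json

# Allowed labels per dimension -- an ASSUMPTION inferred from the labels
# visible in the raw response above; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"outrage", "disapproval", "fear", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; keep only well-formed rows
    whose labels all fall inside the allowed vocabularies."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # malformed row: skip rather than crash
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example input: one conforming row, one with an
# out-of-vocabulary "responsibility" label that should be dropped.
raw = (
    '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"unclear"},'
    '{"id":"ytc_y","responsibility":"robot","reasoning":"mixed",'
    '"policy":"unclear","emotion":"fear"}]'
)
print([r["id"] for r in validate_codes(raw)])  # → ['ytc_x']
```

Rejecting out-of-vocabulary labels outright (rather than coercing them to "unclear") keeps silent label drift by the model visible in the pipeline's error counts.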