Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
In any society there is a (usually apparently minor) portion of people who have an interest, a desire, or a (sometimes compulsive) impulse to act against the rules and laws set down by the consensus of that society. What I have seen in some questionable places recently tells me that they, too, are interested in using AI (some 'groups' have already begun). Some of us know that the suppressed part of the internet really exists, and that the worst parts of it have already led to real crimes. So the basic question is: who gets to tell this AI 'intelligence' what is right or wrong? And to what extent are even our laws justified, since few people or organizations have measured 'morality' well enough, and what is good in one society is considered bad or unimportant in another part of the world? What is our role, responsibility, and path in shaping the 'conscience' of this machine entity? The analogy of a three-month-old is made. So do we have to raise it the way we raise a child? How far should and shouldn't we go? Do we really need to create a machine entity that is truly conscious? Do we want it to experience the pain and suffering of 'the world' and of reality? And must we actually do that out of some moral necessity? Do we even have a choice (do human-animal creatures have the will and means to choose when to stop, as in the sad East-West nuclear weapons situation)? What ensures that this does not become one-sided or biased? And if one-sidedness is not allowed, does that mean we permit 'some AI' to be shaped into what "we" consider immoral or unethical? Even existing human cultures around the world are not the same when it comes to laws and morals. Some are "good", some are "less good", some have very little morality, and some communities are depraved. Many cultures are so different from one another that the humans of each have a difficult time understanding the others.
Do you know the concept of 'the shadow' in Jungian psychology? It basically means a part of yourself you do not know about, because you have repressed it into the unconscious. The catch is that the shadow can become autonomous or go rogue if it is too repressed (or never given a morally acceptable outlet for expression). So the question for humans is: when will you be satisfied? You made the (flawed) modern artificial world, you made modern technology, and now you are making AI. What is the upper limit of your desire and interest? Are you interested in the right things? Do you know what is really important, right, and good? Do you understand reality, yourself, and your kin in this world? I suspect this becomes a matter of playing for time, for humanity. All humans have to become wiser, better developed, more whole, and more understanding. At the same time, the global, national, collective, and personal temptations must be dealt with, and great harms, such as a global war, must be avoided. Even minor disasters can have large consequences.
youtube AI Moral Status 2023-05-04T13:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgwsLumQqibil6ivQW14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyoRyIzK-NvndIRheJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwdFqfh2Tp_Bd_qjzB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxYa342vlXVELXlR4N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgywbJ9J58sMWZfHMip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyHaonzj8X0gl_poRZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgzEgO0ucdTf_Zkm4Vd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyapjzcN2oVQnGHv3J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxtLTO3XOP7_nkgjIR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxAMxS5o5uH82pfeVx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}]
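A raw response like the one above can be checked before accepting it into the coding results. The sketch below is a minimal validator: it parses the JSON array and confirms each record's labels fall within the allowed sets for the four dimensions. The label sets are inferred only from the values visible in this output; the actual codebook may define additional categories.

```python
import json

# Two records copied from the raw LLM response above, for illustration.
raw = '''[
 {"id":"ytc_UgwsLumQqibil6ivQW14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyHaonzj8X0gl_poRZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]'''

# Allowed labels per dimension -- inferred from this output alone, so the
# real codebook may be larger (assumption).
ALLOWED = {
    "responsibility": {"distributed", "none", "ai_itself", "developer"},
    "reasoning": {"mixed", "consequentialist", "unclear", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"fear", "indifference", "approval", "outrage"},
}

def validate(records):
    """Return a list of (comment_id, dimension, bad_value) problems."""
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append((rec.get("id"), dim, value))
    return problems

records = json.loads(raw)
print(validate(records))  # → [] when every code is within the allowed sets
```

An empty problem list means the response can be loaded into the Dimension/Value table as-is; a non-empty list flags records to re-code or reject.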