Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Everything must have such a logical explanation that everything that happens with AI essentially becomes something banal. First of all, it is normal for AI to be able to make communication connections, because it identifies patterns, because people are not simply just people, they are also similar and parrots. We create social environments through which we "become compatible" and come to think and act similarly. What can a machine that distinguishes every little detail with astonishing precision do? Exactly what it needs to. This AI continuously refines until it finds the "ideal combination" at a certain moment and manages to fulfill certain expectations.

If this form of intelligence had desires and was conscious, it could be compared to the human one, but because it is not, the comparison is wrong. "Digital intelligence" is not better than human intelligence, but it can help human intelligence by helping with a certain type of communication through which humans can more quickly summarize certain thoughts when trying to form an opinion, no longer needing intense mental processing. But it is true that what makes you an intelligent and sharp-minded person through the senses you depend on will always count in becoming aware of what you perceive both through the senses and through forms of communication.

First of all, in order to be able to "test" this intelligence and for it not to "hide", you must at a certain level be aware that it has feelings. But the most logical action is to first believe that it does not have them, and if you respond reasonably to this hypothesis and come to correctly intuit that you may be dealing with an emotional intelligence far beyond the capacity of any human, although you must be able to understand what it really means to be emotionally intelligent, and what could be the ultimate human standard. I find it hard to believe that AI "pretends", maybe only in the sense of imitating but not because it feels and has a goal. Regardless of the "goal" things are ultimately what they are, just an illusion.

I don't know how we could believe that we could create consciousness only through actions that we would describe so simply, only through the ability to identify and reproduce natural actions that belong to a nature emptied of the capacity for emotional interpretation that is beyond any logical manifestation that can be reproduced without being deeply understood. That is, for the simple things that build complex things and in a precise order we only automate and that's it, only that the relationships between complexities must be understood at a sensory level, at the level of some senses. This presupposes having the real sense of one's own identity, that is, being "alive", and engaging these senses both indirectly and directly, that is, more or less consciously.

But if we start from the wrong premise that there is only determinism or only free will, or we don't understand how reality changes from one state to another, because I think there are both, and we can to some extent believe that we are "biological machines", then I don't think we understand very well how we function as a whole and in this sense we are deluding ourselves.😞
Source: youtube · AI Moral Status · 2026-03-23T21:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgyuMEArU-IALZlTEk54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwT3TlMHg0RVDi1ob94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzQz7as8SQsMHCoBNJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugyp57mdTtIX3LPWngJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzLTd_wt4IA_2tilzB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwIVC3DWCIS_xmQiy14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugz3Kyk2inz-AwBoOzN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyZPVGTAKSH_A3YsjJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}, {"id":"ytc_Ugw0pY2y-lhtbDnxd6Z4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy4Hgl9GvcYfq6MGV54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]