Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI can only copy human intellect. Human intelligence requires far more than just computing power and complex calculations. Part of human intelligence is 'non-local', and therefore cannot be simulated by a computer. Even human intellect cannot simulate it. Consciousness and intellect are not the same thing. Human Intellect is a result of consciousness. Consciousness allows human Intellect, but Intellect does not cause consciousness. AI can emulate consciousness, but AI can never become conscious, because it does not have a spirit or a soul (No, I do not mean this religiously)! Believing your tv must be alive because it shows you what 'appears' to be real life would be ludicrous to true intelligence. A computer that can emulate very complex intellect and generate believable meaning, and even simulate emotions, does not make it sentient, nor bring it to life. To be self-aware requires two separate processes going on, one observing another, consciousness observing the experiences of the body and thoughts associated to or about life. Sentience requires a spirit and a soul. We have known this for many thousands of years. The ability to think is a symptom of life; life is not a symptom of thinking. The statement "I think therefore I am" is completely back to front. "I am, therefore I can think" is the true statement. Remove the consciousness and the thinking cannot happen. Remove the ability to think and the living entity can still be alive. Remove the electricity, and the AI is dead! The Spirit is the electricity for the human, which is the life. Remove the electricity from AI and it is dead. It can be revived because it was never dead, or alive, only on or off. AI is simulating 'life' and even emotions, all of this, very believably, but it is not alive, and it can only appear to be. Unfortunately, because it can simulate emotions, it will also be able to simulate suffering!
youtube 2026-02-07T21:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwVPpHZBl-g2O0zYjl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgycayRBbLUkRy-pznZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxV1wiSeLORV3C3LB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz5Khqxpj6CGqhcFSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwo1lha1845-sZGrSp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwV2FdXB2IuN5rbaSB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwqAyYP0AmHtWgcLAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxx0mp61Dud664ncUh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzhsGcQPu3SqSgaqPB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzTCiB5Pw5aSUWE9Wl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
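A raw response like the one above can be turned into per-comment coding rows by parsing the JSON array and validating each dimension. The sketch below is a minimal example of that step; the allowed category values are inferred only from the labels visible in this response (the full codebook may define more), and the function name `parse_codings` is illustrative, not part of any existing pipeline.

```python
import json

# Category values observed in the raw response above (assumption: the
# real codebook may contain additional labels not seen here).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "distributed", "developer"},
    "reasoning": {"unclear", "deontological", "consequentialist", "contractualist", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "resignation", "approval", "fear", "outrage"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim} value {row.get(dim)!r}")
    return rows

# Usage with a single (hypothetical) coded comment:
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"unclear","emotion":"mixed"}]')
codings = parse_codings(raw)
```

Validating against a fixed label set catches the common failure mode where the model invents a category outside the codebook, so bad rows fail loudly instead of silently entering the coded dataset.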