Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Old-style software engineer who majored in AI back in the 80s. In short, no. Consciousness is way more than regurgitating popular facts from a huge repository of information. What the AI crowd learned in the 2000s (from the developmental psychologists) was how to allow a program to set its own parameters for prioritizing one piece of information over another. The major successes in this were over seemingly innocuous elements such as recognizing grammar constructs and contextualizing language constructs into concepts. Everything else is just a question of modelling prioritization of information, which quite frankly we were doing in a nascent form in the 1980s.

Intelligence has a pretty low bar when it comes to determining machine capabilities. The fact that ELIZA could superficially mimic Rogerian psychotherapists in the 1970s should be clear enough. ChatGPT is effectively an extremely sophisticated version of ELIZA. Both pass the Turing test for intelligence. But the measure for consciousness is rather more difficult to define, much less grant. The problem is further exacerbated by careless use of terms. A machine may be "sentient" in one sense, and we can see this in the remarkable work done in robotics and optics. But being able to sense environmental factors does not mean that one is self-aware, which is a crude short-hand for what most people mean when they talk about consciousness.

Finally - and this is going to be the real downer on the whole endeavour - we're at the very start of conversations about free will and determinism. Under these two paradigms the nuts and bolts of consciousness are viewed in completely different ways. Yet the choice of one paradigm over another is essentially made on an emotional rather than a rational basis. You can see this in the fact that - despite their protestations to the contrary - many dyed-in-the-wool determinist atheists don't actually live their lives or form their societal perspectives on that basis, but instead act in the world as though free will actually exists. Furthermore, any claim to choose one over the other tends to be axiomatic rather than a rational derivation from other precepts. If we can't find a way through this impasse, we'll actually have to use at least two separate measures for consciousness if we're going to get close to answering this question at all.

To be more specific about ChatGPT, the reason why it can't be conscious, regardless of whether you're into determinism or not, is that it is not embodied. Its interface with the outside world is, basically, information stored on the internet. There is no centralized identity that can be formed until a being can self-reflect, and that cannot happen until you can "see" yourself in the third person. We can see this when animals view themselves in a mirror, or view others of their own species. That self-recognition, and self-categorization of the self as part of a type of being, is the beginning of self-awareness, which until embodiment is achieved will merely be illusory.
youtube AI Moral Status 2026-01-28T16:0…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgzpC_sSTdCABRcpkcB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwZBIHr1lIRG8Da4BZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzlbOgf_YXMfk0KUW54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzhvsPs_HukQQKjnhh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx907P2HV9Jdpi2lLJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz576o4cXWfR8xOpRp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxnzx6dnIVgf8w6Okp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx-b0bTRNWx8VKX9d54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwv8EFs_jnNfnSU_ax4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyNeYV78JpvKIi3PNt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}]
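The raw response above is a JSON array of per-comment coding records, one object per comment with the four coded dimensions. As a minimal sketch of how such output can be parsed and tallied (assuming Python and its standard `json` module; the record ids below are shortened placeholders, not real comment ids):

```python
import json
from collections import Counter

# Hypothetical sample shaped like the raw response above:
# a JSON array of per-comment coding records.
raw = """[
  {"id": "ytc_EXAMPLE_A", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_EXAMPLE_B", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_EXAMPLE_C", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

records = json.loads(raw)

# Tally one coded dimension (here: emotion) across all records.
emotion_counts = Counter(r["emotion"] for r in records)
print(emotion_counts)
```

Counting a dimension this way makes it easy to spot-check the coded distribution (e.g. how often "indifference" appears) against the individual records shown on this page.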