Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that we are potentially close to having conscious AIs, but this AI is not yet conscious. One necessary property that LaMDA is missing is having an _internal state_ that could have any subjective experiences. If you ask LaMDA the question "Are you conscious?" and it answers "Yes", there is nothing that could remember being asked this question or giving this answer. This is not an insurmountable obstacle, however; it is just a property of how the current generation of language models works. They act as feed-forward networks, directly transforming the text into the prediction of the next word. However, this property of neural networks is not necessary: there are other model architectures (recurrent networks) that _do_ have memory. Such networks have an internal state that is modified by reading a question and that is later used to produce an answer, which the network also "remembers". The only reason such networks are not used for LaMDA and other modern language-based AIs is that they are more difficult to train. It is entirely possible, and likely, that this hurdle will be overcome. When it happens, I don't see any objective reason to deny their consciousness.
reddit AI Moral Status 1655310422.0 ♥ 1
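The comment's distinction between a feed-forward transform and a recurrent network with internal state can be illustrated with a toy sketch. This is a minimal illustration of the idea only, with made-up numbers; it is not a real language model and the update rule is an arbitrary assumption for demonstration.

```python
# Toy illustration of the commenter's point: a recurrent cell carries an
# internal state across inputs, unlike a pure feed-forward transform.

def rnn_step(state, token_value):
    # New state blends the previous state with the current input.
    # The 0.5 weights are arbitrary, chosen only for the demo.
    return 0.5 * state + 0.5 * token_value

state = 0.0
for token_value in [1.0, 0.0, 1.0]:  # "reading" a question, token by token
    state = rnn_step(state, token_value)

# The final state depends on everything read so far, so it "remembers"
# the question and can condition a later answer.
print(state)  # 0.625
```

A feed-forward model, by contrast, would map each input to an output with no carried-over state, which is the gap the comment points to.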
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_icip8od", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "disapproval"},
  {"id": "rdc_icg4ou2", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_ich04w6", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_icgw4jo", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_icgs2jv", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "skepticism"}
]
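The raw response above is a JSON array of per-comment codings, so it can be parsed and indexed by comment id with the standard library. The sketch below uses the response verbatim; which `rdc_…` id corresponds to the comment shown above is not stated in the log, so no mapping is assumed.

```python
import json

# The raw LLM response, copied exactly from the log.
raw = '''[ {"id":"rdc_icip8od","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"disapproval"}, {"id":"rdc_icg4ou2","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"rdc_ich04w6","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"rdc_icgw4jo","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"rdc_icgs2jv","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"skepticism"} ]'''

codings = json.loads(raw)

# Index by comment id; the keys mirror the coding dimensions in the table above.
by_id = {c["id"]: c for c in codings}

print(len(by_id))                        # 5 coded comments
print(by_id["rdc_ich04w6"]["emotion"])   # indifference
```

Indexing by id makes it straightforward to join these codings back to the source comments, and `json.loads` will raise an error if the model ever returns malformed JSON, which is a useful early check in a coding pipeline.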