Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "its also extremely unethical and used just because of corporate greed. companies…" (ytr_UgyssUvdi…)
- "2 weeks time, someone will geniunely ask ChatGPT wherher they should save a real…" (ytc_UgzeQ-1u7…)
- "Here's the thing though. Hiring a Ai, is not cheaper than hiring a human. A A…" (ytc_UgxvmpTz5…)
- "Look closely at the robot it hits something and sparks shoot everywhere then thr…" (ytc_Ugy1D7ynf…)
- "The nonsesne and unspecific laws we have regarding copyright and other intellect…" (ytc_Ugwl430UQ…)
- "AI should only be programmed to solve regular problems, answers we cant agree ab…" (ytc_Ugy-_6wHR…)
- "Thank you for adressing the problems with ai art! Means a lot for you to take ti…" (ytc_Ugz4Xxwwo…)
- "Ai made me into a cat woman with tentacles and now I'm scared to download any mo…" (ytc_Ugy5MfMW-…)
Comment
I think that we are potentially close to having conscious AIs, but this AI is not yet conscious.
One necessary property that LaMDA is missing is having an _internal state_ that could have any subjective experiences. If you are asking LaMDA a question "Are you conscious?" and it answers "Yes", there is nothing that could remember being asked this question or giving this answer.
This is not an insurmountable obstacle however, this is just a property of how the current generation of language models work. They act as feed-forward networks, directly transforming the text into the prediction of the next word.
However, this property of neural networks is not necessary, there are other model architectures (recurrent networks) that _do_ have memory. Such networks have an internal state, that is modified by reading a question, and that is later used to produce an answer, which the network also "remembers".
The only reason why such networks are not used for LaMDA and other modern language-based AIs is that they more difficult to train. It is completely possible and likely that this hurdle will be overcome. When it happens, I don't see any objective reason to deny their consciousness.
reddit · AI Moral Status · 1655310422.0 (Unix timestamp) · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
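The four dimensions above form one fixed record per coded comment. A minimal sketch of that record shape, with field names assumed from the raw LLM response shown below (the dataclass itself is illustrative, not the tool's actual data model):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions in the table above.

    Field names follow the keys in the raw LLM response shown below;
    the class itself is an assumption for illustration only.
    """
    id: str                          # comment ID, e.g. "rdc_..." or "ytc_..."
    responsibility: str              # e.g. "developer", "unclear"
    reasoning: str                   # e.g. "consequentialist", "deontological"
    policy: str                      # e.g. "unclear"
    emotion: str                     # e.g. "indifference", "skepticism"
    coded_at: Optional[str] = None   # ISO 8601 timestamp, as in "Coded at" above
```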
Raw LLM Response
```json
[
  {"id":"rdc_icip8od","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"disapproval"},
  {"id":"rdc_icg4ou2","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"rdc_ich04w6","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_icgw4jo","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_icgs2jv","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"skepticism"}
]
```
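Each raw response is a JSON array with one record per comment, keyed by `id`. A minimal sketch of how such a response could be parsed and indexed to support the by-ID lookup described at the top of this page; the function and variable names here are illustrative assumptions, not part of the tool:

```python
import json
from typing import Dict

def index_codings(raw_response: str) -> Dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded comments)
    and index its records by comment ID."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Example using two of the records from the response shown above.
raw = '''[
  {"id": "rdc_icip8od", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "disapproval"},
  {"id": "rdc_icgs2jv", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "skepticism"}
]'''

codings = index_codings(raw)
print(codings["rdc_icgs2jv"]["emotion"])  # -> skepticism
```

In practice a malformed model response would also need error handling (for example, catching `json.JSONDecodeError`), which is omitted in this sketch.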