Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I can lower myself to their level.
I'm not an artist. Tried a few times but my …
ytc_UgyUZIyEZ…
I'm not defensive about anything. I don't like AI for the same reason i don't li…
ytr_UgxJZoqEb…
The ONLY time ive used ai was to make funny images to laugh at the weird forms w…
ytc_UgzvFOzwP…
Police can ask Waymo for videos from their cars. It is more of Big Brother watc…
ytc_Ugx4q0t5A…
No offense but your memey Breaking Bad AI art sucks compared to AI art that peop…
ytc_UgxfBaIvk…
Or just don't use generative AI since it steals things from people, doesn't give…
ytc_UgwSHMcbB…
fire rises no that’s not how it works. AI doesn’t behave in a deterministic way,…
ytr_UgyoQAW2F…
@Tanufistrying > *just illegal datapacks*
Just stating it IS for a fact illegal …
ytr_Ugz8oiP_V…
Comment
This idea relies on the implicit assumption that "consciousness" is entirely defined by behavior. I don't find that compelling.
Suppose you had a word generator that returned sentences composed of words selected completely randomly (note that I am not at all saying this is what LLMs do, please stick with me). This word generator was involved in an endless series of conversations until its random responses perfectly fit the conversation, purely out of luck, such that the behavior implied by its responses is indistinguishable from conscious, human behavior for the duration of the conversation.
Would we say that the random word generator was sentient for the duration of that sole conversation because its behavior was perfectly aligned with that of a human, and we know humans are conscious? Certainly not, and we would reference the mechanism of how it engaged with the conversation (perfectly random word selection).
So, by contradiction, consciousness cannot be solely defined by behavior. There must be an understanding of the mechanism that drove the seemingly-conscious behavior to determine if consciousness is indeed present. Since we still do not know how to define this even for humans, I don't think it is possible to reach a strong conclusion that any LLM or AI agent is (or is not) conscious. In my opinion, it is more likely that the LLM is closer to the perfectly random word generator used in the example than it is to human consciousness.
reddit
AI Moral Status
2025-02-19 (Unix timestamp 1739930226)
♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mdj5g30","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mdinz3v","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_mdj0zmu","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mdjij7m","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mdjnfdi","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}]
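A raw response like the one above is expected to be a JSON array of objects, one per comment ID, with one value per coding dimension. Raw model output can be malformed (for example, a stray `)` in place of the closing `]`), so a parser should tolerate the common truncations before loading. A minimal sketch, with a hypothetical helper name (`parse_coding_response`) and a simple repair heuristic that are assumptions, not the tool's actual implementation:

```python
import json

# Dimensions coded per comment, as shown in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Hypothetical helper: repairs one common malformation (a trailing ")"
    where "]" was expected) before parsing, and falls back to "unclear"
    for any dimension the model omitted.
    """
    text = raw.strip()
    if text.endswith(")"):           # stray ")" instead of closing "]"
        text = text[:-1] + "]"
    records = json.loads(text)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

# Example with a malformed single-record response:
raw = ('[{"id":"rdc_mdj5g30","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"})')
codes = parse_coding_response(raw)
print(codes["rdc_mdj5g30"]["emotion"])  # indifference
```

The lookup-by-ID view above would then just index the returned dictionary by comment ID.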