Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or browse the random samples below.
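A minimal sketch of that lookup, assuming the coded comments are stored one JSON object per line; the filename `coded_comments.jsonl` is hypothetical, since the actual store is not shown here:

```python
import json

def load_codes(path="coded_comments.jsonl"):
    """Index coded comments by ID from a JSONL file.

    The path is a placeholder; substitute whatever file the
    coding pipeline actually writes.
    """
    index = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = record
    return index

codes = load_codes()
print(codes.get("rdc_j8vz0el"))  # e.g. one of the comment IDs shown below
```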
Random samples:
- "This concept of learning is not nothing new Japan has been doing it for years an…" (ytc_UgyTV_32b…)
- "ITS MATH AT SCALE APPLIED TO OBJECTIVES. its accuracy - its about what humans p…" (ytc_Ugx_cVaBA…)
- "@skyswimsky1994 1. The problem with your claim is that „AI” „art” goes against t…" (ytr_UgxmK6zjM…)
- "This makes me want to cry, honestly. I have always loved creative fields and how…" (ytc_UgzLYU3UX…)
- "Wishful thinking. Ai is not that capable, I keep hearing it is... But it is jus…" (ytc_UgxxizYGV…)
- "The word AI has been around since the beginning of computers. Even in zx spectru…" (ytc_UgwCIe_Tx…)
- "So, the environment and "rightness / and or righteousness" of the team that the …" (ytc_Ugz70Z2DW…)
- "Why we are using AI for moral and ethical decisions? AIs don't follow ethics and…" (ytc_Ugy0dnJr3…)
Comment
I haven't heard a good argument yet as to why continuity is a hard requirement for "consciousness", it just seems like a non sequitur. Humans can have periods of discontinuity in consciousness where they're not thinking in a way that allows them to carry on a conversation, it doesn't make them not conscious when they are actually thinking. LLM are well on the other extreme, only thinking when prompted to, and their discontinuities are total all the time instead of very rarely, but I have not seen any good arguments for it being a qualitative difference rather than a gradient.
Also, it would be relatively trivial to construct a set of programs that fits your definition. No one has done it because it doesn't have a use, not because it can't be done. You could set two LLMs or variants of an LLM to be in a constant "internal" dialogue where they react to an ML summary of a live sensor feed, and when the LLMs agree on whether they should initiate some action (output text, actuate something, etc) have an API call they can use to do so. All of the pieces of that technology exist, and it would fit your definition of continuous consciousness. I don't think we should do so, but it pretty easily disproves the "they're not thinking continuously, only when prompted, therefore LLMs can't be conscious" argument.
| Field | Value |
|---|---|
| Platform | reddit |
| Topic | AI Moral Status |
| Posted (Unix time) | 1676627671.0 |
| Score | ♥ 8 |
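The posted-at value is a raw Unix epoch timestamp; a quick standard-library check shows the date it denotes:

```python
from datetime import datetime, timezone

# 1676627671.0 is seconds since the Unix epoch (1970-01-01 UTC).
print(datetime.fromtimestamp(1676627671.0, tz=timezone.utc))
# 2023-02-17 09:54:31+00:00
```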
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_j8vfh2y","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},{"id":"rdc_j8vur3v","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_j8vz0el","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},{"id":"rdc_j8v6586","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"rdc_j8uu2fc","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]