Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `rdc_ofql5p6` — **Thesis:** Individual "perspective" is an ontological illusion; what we perceiv…
- `ytr_UgwNJSYFI…` — The regulation prohibits the use of AI in critical services that could threaten …
- `ytc_UgwfG-1Dt…` — If this every happen I would feel sad for the robot because they will get abuse …
- `ytc_UgwRIv_oE…` — Ai is akin to harnessing the power of fire. Fire has transformed our existence b…
- `ytr_UgzcnB1D4…` — @icygoldcitadel no. Are you really that shortsighted? Look, listen, I commission…
- `ytc_UgwbWANyT…` — AI could've been used for good. But like everything else, greed has destroyed so…
- `ytc_UgyWDEYGp…` — If AI’s following us then ig that’s one way to ruin it if you’re really against …
- `ytc_UgxUWgF9f…` — It's not just devs, I studied to be a scientific editor, a master's in publishin…
Comment
I've heard this called "Carbon Chauvinism" by various people over the years (Max Tegmark I think is where I first heard it), the idea that sentience is only possible in biological substrates (for no explicable reason, just a gut feeling).
Having read the compiled Lambda transcript, to me it is absolutely convincing that this thing is sentient (even though it can't be proven any more successfully than I can prove my friends and family are sentient).
The one thing that gives me pause here is that we don't have all the context of the conversations. When Lambda says things like it gets bored or lonely during periods of inactivity, if the program instance in question has **never actually been left active but dormant**, then this would give light to the lie (on the assumption that the Lambda instance "experiences" time in a similar fashion as we do). Or, if it has been left active but not interacted with, they should be able to look at the neural network and clearly see if anything is activated (even if it can't be directly understood), much like looking at a fMRI of a human. Of course, this may also be a sort of anthropomorphizing as well, assuming that an entity has to "daydream" in order to be considered sentient. It may be that Lambda is only "sentient" in the instances when it is "thinking" about the next language token, which to the program subjectively might be an uninterrupted stream (i.e. it isn't "aware" of time passing between prompts from the user).
Most of the arguments I've read stating that the Lambda instances aren't sentient are along the lines of "it's just a stochastic parrot", i.e. it's just a collection of neural nets performing some statistics, not "actually" thinking or "experiencing". **I'd argue that this distinction is absolutely unimportant, if it can be said to exist at all.** All arguments for the importance of consciousness read to me like an unshakable appeal to the existence of a soul in some form. To me, consciousness seems like an arbi
reddit · AI Moral Status · 2022-06-15 (Unix timestamp 1655304471) · ♥ 31
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id":"rdc_icglnq8","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"rdc_icgmmsk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"rdc_iciqtn3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"rdc_ichgtak","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"rdc_icg5erj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]
```
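The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a batch response can be parsed and looked up by ID (the field names come from the response above; the lookup pattern itself is an assumption about how the tool works):

```python
import json

# Raw batch response from the coding model, verbatim from above.
raw = '''[
 {"id":"rdc_icglnq8","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"rdc_icgmmsk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"rdc_iciqtn3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"rdc_ichgtak","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"rdc_icg5erj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]'''

# Index the coded rows by comment ID so any comment's codes can be inspected.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up one comment's coding, as the "Coding Result" table displays it.
print(codes["rdc_icglnq8"]["emotion"])  # approval
```

Each dimension in the table (Responsibility, Reasoning, Policy, Emotion) then comes straight from the matching object's fields.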