Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or start from one of the random samples below.
- "Programmers are getting scared and they should. AI might not take over there job…" (`ytc_UgwQlFfXI…`)
- "Me in: / 2022: wow chatgpt is so cool / 2023: image generation, wild / 2024: video gen…" (`ytc_UgxqM-Hcc…`)
- "I dont really care for it either way, but that image of the city by the ocean is…" (`ytc_UgzfeISqJ…`)
- "It’s not like working on a calculator. AI is really dangerous—companies will hir…" (`ytc_UgzR_MvUg…`)
- "This was the standard \"it's depressing to hear Bernie talk, because he's right, …" (`ytc_UgxTMlR9G…`)
- "been taking ubers for almost a decade and taking taxis my whole life, experience…" (`ytc_Ugx0IN8Eq…`)
- "Life is about gift, and because of its finite time is what gives it such value. …" (`ytc_Ugx2PQqHh…`)
- "Basically: There's nothing to care about / We, humans, care about other humans or …" (`ytc_Ugw2b5Twf…`)
Comment
> It did appear to have a sense of self, be aware of the passage of time and recall memories. Whether these are just the outputs of a clever text predictor or evidence of sentience is not something I feel qualified to speculate on but at least the superficial appearance is there.
OK, but in the case of LaMDA we have something that we don't have for humans, namely a complete reductionistic understanding of how it's implemented. That doesn't mean we understand everything it does -- but it does allow us to put certain very strong *constraints* on what it *might* be doing.
In particular, assuming LaMDA is structurally similar to other "transformer"-based language models (and I haven't seen any claims to the contrary), its output is a mathematically [pure function](https://en.wikipedia.org/wiki/Pure_function) of the input text you give it (plus maybe the state of a random number generator). We know it has no memory because its design does not incorporate any mechanism for it to store information at one point in time and retrieve it at a later point in time.
Any time you see these back-and-forth conversations with a text-generating neural network, they're invariably being "faked" in a certain sense. When LaMDA says something, and the human asks a follow-up question, an *entirely new* instance of the network with the *same* initial state is being run to answer that question. The reason it appears to be able to carry on a coherent dialogue is because each instance is *prompted* with the complete transcript of everything its "predecessors" said in the current discussion. Even if a single instance of LaMDA could be said to have an internal "thought", its subsequent behavior in the same conversation can not be influenced in any way by that thought.
It's not just that LaMDA has no long-term memory of facts. It's structurally impossible for it to have future "mental states" that depend directly on its past "mental states". This is not a matter about which we need to speculate.
Source: reddit · AI Moral Status · posted 1655321740 (Unix timestamp) · ♥ 22
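The quoted comment describes how a multi-turn "chat" with a transformer language model is reconstructed each turn by re-prompting a stateless model with the full transcript. Below is a minimal illustrative sketch of that call pattern; the `generate` function and `chat_turn` helper are hypothetical stand-ins for a text-completion call, not LaMDA's actual interface.

```python
# Minimal sketch of the pattern the comment describes: each conversational
# turn is a fresh, stateless call, and the only "memory" is the transcript
# the caller re-sends in the prompt. `generate` is a hypothetical stand-in
# for a transformer's text-completion call.
from typing import Callable, List

def chat_turn(generate: Callable[[str], str],
              transcript: List[str],
              user_msg: str) -> List[str]:
    """Run one turn against a stateless model and return the updated transcript."""
    transcript = transcript + [f"User: {user_msg}"]
    prompt = "\n".join(transcript) + "\nModel:"
    reply = generate(prompt)  # output is a pure function of the prompt text
    return transcript + [f"Model: {reply}"]

# Toy "model": the coherence of the dialogue lives entirely in the transcript
# kept on the caller's side, never inside the model between calls.
fake_model = lambda prompt: f"(completion of a {len(prompt)}-character prompt)"
t = chat_turn(fake_model, [], "Are you sentient?")
t = chat_turn(fake_model, t, "Do you remember what I just asked?")
print("\n".join(t))
```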
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ich211h","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_ichruie","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_icj3zi9","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_icfy1dl","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_ichezni","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
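The raw response is a JSON array with one object per coded comment. As a minimal sketch of how such a batch could be parsed into per-comment coding results: the dimension names come from the response shown above, while the function name and validation logic are illustrative assumptions, not the project's actual pipeline code.

```python
# Sketch: parse a batched coding response (JSON array of coded comments)
# into a mapping of comment ID -> coded dimensions, checking completeness.
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Return {comment_id: {dimension: value}}, raising if any row is incomplete."""
    rows = {}
    for item in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in item]
        if missing:
            raise ValueError(f"{item.get('id', '<no id>')} is missing: {missing}")
        rows[item["id"]] = {d: item[d] for d in DIMENSIONS}
    return rows

# e.g. parse_coding_response(raw_text)["rdc_icfy1dl"]["emotion"] -> "mixed"
```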