Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugzx4F1AX…`: "These critical decisions and discussions happened 50plus years ago.At all costs;…"
- `ytc_Ugy1CYF2u…`: "I think the hype behind AI is hilarious. Remember back in the days of Apollo 11,…"
- `ytc_Ugz0sgyWR…`: "Ai is lame idea. Gonna make humans stupider and lazy. Critical thinking is alrea…"
- `ytc_Ugymx-5Fp…`: "Acting will be one of the first jobs to go. Hollywood will have to find a sycoph…"
- `ytc_UgzugcpuD…`: "The eye blinks aren't right, they give it away. Other than that it looks real.…"
- `ytc_Ugy0s7ELt…`: "I wonder how the armies of AI not will react in the comments here. Will they fei…"
- `rdc_kp05xbx`: "The output of a chatbot depends on its training data, yes? So if it's been train…"
- `ytc_UgwayS_wK…`: "I hope AI bubble will burst and all the pc components prices will be normal agai…"
Comment
An enemy in a video game appears to have a sense of self, to be aware of his surroundings. They dodge when you try to attack them, they yell if they get shot, and so on. But it's an illusion, created by a few subroutines and voice files recorded by a person.
When you read a chatbot type conversation created through an AI language model, you are being fooled into thinking you're seeing something you aren't. The language model is essentially playing the role of "AI", which is a character that you've told it to write from the perspective of. It isn't speaking as itself. As I mentioned previously, you could just as easily tell it to speak as Batman or Sherlock Holmes.
You can quite easily create a conversation between "AI" and "human", but you type in the text for "AI" and the language model produces the text for "human". The language model will ask all sorts of questions to "AI" that a person might ask an AI, and then you, playing the role of AI, will answer them.
So the "AI" character appears to have a sense of self, because the language model has been told to write text about an AI having a conversation with a human. The language model, having been trained on the entirety of the text of the internet, knows what sort of things an AI would be likely to say, and so it produces text along those lines.
AI language models are fairly good at writing whatever sort of text you tell them to write. They can write an essay about global warming, they can write a poem (although they don't rhyme), they can summarize text, they can do all sorts of text based things. In this instance, someone has told an AI language model to produce the text of one half of a chat conversation.
What you have here is a new thing, intelligence without awareness. Once you understand how they work, this is clear. If you don't understand what you're looking at, and you look at just chatbot text, it's very easy to anthropomorphize the language model and think there's someone there you can…
Source: reddit · Topic: AI Moral Status · Posted (Unix timestamp): 1655343472.0 · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_ich211h","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_ichruie","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_icj3zi9","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_icfy1dl","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_ichezni","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
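Each record in the raw response pairs a comment ID with the four coded dimensions shown in the table above. A minimal Python sketch of how such a response could be parsed and validated is below; note that the allowed value sets are assumptions inferred only from the values observed in this sample, not a documented schema, and `parse_codings` is a hypothetical helper, not part of the tool.

```python
import json

# Value sets observed in this sample output; the real coding scheme
# presumably allows more values (this is an assumption, not a spec).
ALLOWED = {
    "responsibility": {"none"},
    "reasoning": {"unclear"},
    "policy": {"unclear"},
    "emotion": {"indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into
    {comment_id: {dimension: value}}, dropping records whose values
    fall outside the observed sets."""
    out = {}
    for rec in json.loads(raw):
        dims = {k: v for k, v in rec.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in dims.items()):
            out[rec["id"]] = dims
    return out

raw = ('[{"id":"rdc_ich211h","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear",'
       '"emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["rdc_ich211h"]["emotion"])  # indifference
```

Keying the result by comment ID makes it easy to join these codings back to the sample list above, e.g. to render the per-comment table shown under "Coding Result".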