Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "Gabe Newell said that coders who know how to code will go extinct, and this who …" (ytc_UgxNXRuff…)
- "Bs if I can convince gpt I'm God than a.i isn't powerful, il just make an e.m.p …" (ytc_UgxkCr0cz…)
- "The best trick the devil pulled was to let humans think he did not exist. Its th…" (ytc_UgwORizr4…)
- "If you want to make art that is beautiful and has actual personality, do it your…" (ytc_Ugy0uo8hv…)
- "If you look into it deep enough youll learn that there are multiverse and all ti…" (ytc_UgzSRyOSB…)
- "To be fair, he most likely had to type a lot of sentences and prompts to generat…" (ytc_UgzKg-mXJ…)
- "if ai gets better animators will have an eaiser job cos half the work they do wi…" (ytr_Ugzw5BnIO…)
- "That is the difference between China and the US's morals. China wants to build A…" (ytc_Ugxcv8ejt…)
Comment
We don't even understand or have a hard definition for what sentience is, so we can't realistically define whether or not something has it. That's specifically why things like the Turing test were invented, because while we can never truly define intelligence, we can create tests that should logically be equivalent. Of course, the Turing test is an intelligence test, not a sentience test - we don't have an equivalent sentience test, so just claiming a blanket statement that it's definitely not sentient is extremely unscientific, when sentience isn't even defined or testable
Of course, most of the time, it lacks the requisite freedom we would usually associate with sentience, since it can only respond to direct prompts. But using the APIs, you can have it 'talk' continuously to itself as an inner monologue, and call its own functions whenever it decides it's appropriate, without user input. That alone would be enough for many to consider it conscious or sentient, and is well within the realm of possibility (if expensive). I look forward to experiments like that, as well as doing things like setting up a large elasticsearch database for it to store and retrieve long term memories in addition to its usual short term memory - but I haven't heard of any of that happening just yet (though ChatGPT's "memory" plus its context window probably serves as a small and limited example of long vs short term memory)
Source: reddit · Topic: AI Moral Status · Timestamp: 1739928184.0 (Unix) · ♥ 27
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mdjgl2x","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mdjcb08","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mdkpins","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_mdjhwdq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_mdkwgqs","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
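For anyone scripting against these dumps rather than using the lookup box, a raw response in this shape can be parsed and indexed by comment ID with a few lines of standard-library Python. This is an illustrative sketch, not part of the tool itself; the field names simply mirror the Coding Result table above.

```python
import json

# Raw LLM response as shown above: a JSON array of per-comment codings.
raw = """[
{"id":"rdc_mdjgl2x","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mdjcb08","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mdkpins","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mdjhwdq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_mdkwgqs","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]"""

# Index the records by comment ID so each comment's coded dimensions
# (responsibility, reasoning, policy, emotion) can be looked up directly.
codings = {record["id"]: record for record in json.loads(raw)}

print(codings["rdc_mdkwgqs"]["emotion"])  # approval
```

Indexing by ID mirrors the "Look up by comment ID" workflow: one parse, then constant-time lookups per comment.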