Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Yeah ive done reverse image searches of wombo/dreamAI and theres nothing similar…" (ytr_Ugw1WdDoy…)
- "I think the important question is how do we train AI to value, support and prese…" (ytc_UgyrUMAUA…)
- "The technology, to its very core, is essentially Western Civilization centric in…" (ytc_UgwGiEt7s…)
- "I'm hoping for a human rebellion that will ban AI to some degree. Mass unemploy…" (ytc_UgxCTwK-9…)
- "Oh and companies are using ai art instead of hiring ACTUAL artists so they don't…" (ytr_UgzLr3xpx…)
- "It's their style completely ripped out of frames of the movies and shows they've…" (ytc_UgwU4r2FN…)
- "No one voted for this stupid AI! Why can't we people have a saying on this? Why …" (ytc_UgyzEALqU…)
- "IF "The comedian" is art, then AI generated images are art as well - if we're go…" (ytc_UgzuQvFoG…)
Comment
@zyeborm I think you clearly misunderstood my premise. AI can very much become sentient; at least given our current understanding, we have no indication that it could not.
I am just saying that those transformer models are not capable of it.
My reasoning, as I explained in this comment, is that it's next-word prediction, forward only. I mentioned the 'reasoning' models because they try to overcome that, but they really don't.
There are other issues: the model cannot differentiate between something it said itself and something the user said. This is why even reasoning models are often incapable of returning to the correct argument after they have latched onto a wrong idea, and why the longer the context, the worse the output gets.
Something else is also required for sentience: the ability to learn. Transformer models learn during training, but they cannot learn during inference. They treat the entire text as input for the next token; they are not able to distill or generalize while you are querying the model. If something doesn't fit into the context window, you are out of luck.
As soon as models can A) learn during inference, committing part of the earlier conversation into their weights and 'forgetting' earlier tokens from the context window, and thus improve by talking to someone, and B) build a model of their response and check it for logic issues before answering, then we can start thinking about consciousness again.
Current LLMs fail at both, hence no sentience is possible.
Source: youtube · Video: AI Moral Status · Posted: 2025-07-09T17:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgwKKu3A-S19lshkH_R4AaABAg.AKMapO-PJMIAKMlhwYqUqC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgwxrUq2ysqKHnFOlNp4AaABAg.AKMaZZjcyjrAKMuWtCZjmM","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwWk14uFrH-gmrTB414AaABAg.AKMaOJ0ZNIZAKMjvCVXzLV","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzLn1T_sgmZZZ6vFoF4AaABAg.AKMaDfWaWHXAKMckSFyKSx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgyQOR3MCDE6LSXenX14AaABAg.AKMa5YvaWg5AKMgdoC5d2s","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwnKxStmHYi256aull4AaABAg.AKM_fxMxHg1AKMaYbzSnYJ","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwnKxStmHYi256aull4AaABAg.AKM_fxMxHg1AKMfRRp_LQz","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwnKxStmHYi256aull4AaABAg.AKM_fxMxHg1AKMi24epIcp","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwnKxStmHYi256aull4AaABAg.AKM_fxMxHg1AKMlsN94hJN","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwnKxStmHYi256aull4AaABAg.AKM_fxMxHg1AKMo2FcJ4IT","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
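A raw response like the one above can be turned into a per-comment lookup of coded dimensions. The sketch below is a minimal example of doing that, assuming the field names visible in the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the set of valid `reasoning` values is inferred from this one sample, not a definitive codebook, and the comment ID used in the demo is hypothetical.

```python
import json

# Reasoning values observed in the sample response above; an assumption,
# not the tool's full codebook.
VALID_REASONING = {"consequentialist", "deontological", "mixed", "unclear"}


def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response (a JSON array of coded comments)
    into {comment_id: {dimension: value}}."""
    coded = {}
    for entry in json.loads(raw):
        if entry["reasoning"] not in VALID_REASONING:
            raise ValueError(f"unexpected reasoning code: {entry['reasoning']!r}")
        coded[entry["id"]] = {
            "responsibility": entry["responsibility"],
            "reasoning": entry["reasoning"],
            "policy": entry["policy"],
            "emotion": entry["emotion"],
        }
    return coded


# Demo with a single hypothetical entry in the same shape as the response above.
sample = (
    '[{"id":"ytr_abc","responsibility":"none",'
    '"reasoning":"deontological","policy":"none","emotion":"indifference"}]'
)
codes = parse_coding_response(sample)
print(codes["ytr_abc"]["reasoning"])  # deontological
```

Validating against a closed set of codes at parse time is useful here because LLM coders occasionally emit labels outside the codebook, and it is better to fail loudly than to silently store an unknown category.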