Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@zyeborm I think you clearly misunderstood my premise. AI can very much become sentient - at least as of current understanding we have no indication it could not. I am just saying those transformer models are not capable of doing that. My reasoning for this I explained in this comment is that it’s next word prediction, forward only. I mentioned the 'reasoning' models because they try to overcome that, but they really don’t. There are other issues: the model cannot differentiate between something it said itself and something the user said. Which is why even reasoning models often are incapable of finding back to the right argument after they got a wrong idea. This is why the longer the context, the worse the output gets. There is also something else that is required for sentience: the ability to learn. Transformer models learn during training, but they cannot learn during inference. They treat the entire text as input for the next token, they are not able to distill or generalize while you are querying the model. If something doesn’t fit into the context window, you are out of luck. As soon as models can A) learn during inference and commit part of the earlier conversation into their model and 'forget' earlier tokens from the context window - hence being able to improve by talking to someone and B) are able to build a model of their response and check it for logic issues before answering, then we can start thinking about consciousness again. Current LLM fail at both, hence no sentience possible.
youtube · AI Moral Status · 2025-07-09T17:2… · ♥ 1
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | deontological
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytr_UgwKKu3A-S19lshkH_R4AaABAg.AKMapO-PJMIAKMlhwYqUqC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgwxrUq2ysqKHnFOlNp4AaABAg.AKMaZZjcyjrAKMuWtCZjmM","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_UgwWk14uFrH-gmrTB414AaABAg.AKMaOJ0ZNIZAKMjvCVXzLV","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgzLn1T_sgmZZZ6vFoF4AaABAg.AKMaDfWaWHXAKMckSFyKSx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytr_UgyQOR3MCDE6LSXenX14AaABAg.AKMa5YvaWg5AKMgdoC5d2s","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_UgwnKxStmHYi256aull4AaABAg.AKM_fxMxHg1AKMaYbzSnYJ","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytr_UgwnKxStmHYi256aull4AaABAg.AKM_fxMxHg1AKMfRRp_LQz","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytr_UgwnKxStmHYi256aull4AaABAg.AKM_fxMxHg1AKMi24epIcp","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytr_UgwnKxStmHYi256aull4AaABAg.AKM_fxMxHg1AKMlsN94hJN","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytr_UgwnKxStmHYi256aull4AaABAg.AKM_fxMxHg1AKMo2FcJ4IT","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"} ]