Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I draw but for fun i went to chat gpt and had it produce ai art prompts. So you …
ytc_UgxYZtKbV…
Actual question I need answered for a certain plan of mine is it illegal to feed…
ytc_UgzOWBPs6…
Remember the AI who was working so much at a monotonous task that it eventually …
ytc_UgxQEvdLI…
The knowledge gap: people that really understand AI have just dangerous knowledg…
ytc_UgxcKPdNK…
You didn't even appreciate Ai how we can expect you to appreciate a human. Typic…
ytc_UgynCByuD…
That's bs. If you have any conversations with chat gtp4 you'll know they are as …
ytc_UgxVSejVf…
Nice try, you should be a bit more accurate promoting "you guilty to global chan…
ytc_UgzIUdV-U…
People "freed up" don't have disposable income to spend on tech. So how are thes…
ytc_UgyJy7KLq…
Comment
This is a false equivalence fallacy.
Yes, human brains are computable. And yes, one day we might develop an AI system with the potential for consciousness.
However, LLMs will never be that system. They can't be. The problem they optimize is far too simplistic: next-token prediction doesn't provide the complexity needed to develop reasoning or logical deduction, and in fact those abilities would be detrimental to token-prediction accuracy. There's also no mechanism in an LLM that could train a model of the world around it, or even comprehension of the symbols it outputs.
Just as importantly, we don't allow them to learn in real time, so they currently have no mechanism by which to test predictions and learn from the results (even if they could form and comprehend such predictions). Instead, training is chunked out entirely: we feed in text data and correct the weights so that the model's output matches the data we fed in as closely as possible.
So much is missing from LLMs compared to something with the potential to reason. What isn't missing, and what confounds the results, is that the output is human-readable language, which lets the humans who read it fool themselves by projecting their desires onto it.
The way LLMs are designed does place a hard limit on the functions they approximate. Using these same models, you could build a network thousands of times bigger than humanity will ever achieve, train it on all the symbols humanity will ever generate, and it still would never achieve sentience.
You MUST create a completely different model if you ever want to see a sentient AI.
youtube
AI Moral Status
2025-07-09T15:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgzyiNhUzcNJA7j93Pp4AaABAg.AKM_3MTVLcLAKMg4R_9CV5","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzyiNhUzcNJA7j93Pp4AaABAg.AKM_3MTVLcLAKMm2qSvx4O","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytr_UgySjha1pNoOdBmCIIV4AaABAg.AKMZjmut2E7AKMcrSCPixL","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgySjha1pNoOdBmCIIV4AaABAg.AKMZjmut2E7AKMhEpZwUFG","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugz7ciIV90QP-y79ned4AaABAg.AKMZS958uN3AKN0B1D8Y-I","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzTf8OaVWjrhbvJpHR4AaABAg.AKLa4i7Vac8AKMeIuOfSGS","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyWwK8lNXJEt3Pl-1h4AaABAg.AKKrfxka-MkAQcyQrMP5ec","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgzDHTnDY08FLwUvM1x4AaABAg.AKKnpF8V3C4AKKyxAdIZlc","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgzDHTnDY08FLwUvM1x4AaABAg.AKKnpF8V3C4AKMaeADVdhz","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzDHTnDY08FLwUvM1x4AaABAg.AKKnpF8V3C4AKMbfcYVNYZ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
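A minimal sketch of how a raw coding response like the one above might be parsed into per-comment codes. The allowed vocabularies are inferred from the samples shown here (the real codebook may define more categories), and the function name is hypothetical:

```python
import json

# Allowed values per coding dimension, inferred from the samples above;
# this is an assumption, not the project's actual codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "mixed", "approval", "outrage"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into
    {comment_id: codes}, dropping records with missing or
    out-of-vocabulary values."""
    coded = {}
    for record in json.loads(raw):
        codes = {dim: record.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[record["id"]] = codes
    return coded

raw = ('[{"id":"ytr_example1","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
print(parse_coding_response(raw)["ytr_example1"]["reasoning"])  # deontological
```

Filtering to a closed vocabulary catches the common failure mode where the model invents a label outside the codebook; such records are dropped rather than silently stored.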