Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I, personally, do not think that a machine that has been made to predict the next word for the entirety of the internet would be able to do that much more than what it currently does.
To me it also seems natural that AI chatbots will tell people to push on with their crackpot science or schizophrenic behavior. After all, the AI chatbot isn't speaking with a human; the prediction-machine is predicting what it would look like if an AI chatbot was talking to a human. And so, once the conversation steers into the crazy/unhealthy, the only reasonable thing to predict is that the AI chatbot keeps pushing forward.
And let's not forget that with every single word/token that the prediction-machine is asked to predict, there is a one-in-a-million chance that it will just pick something really unlikely. And once that mistake is made, there is no undo. The prediction-machine only knows how to predict what comes next, so the mistake gets amplified. No matter how much training is put into these things, that one-in-a-million chance will never fully go away, and so even the most amazing, most advanced LLM will eventually just accidentally lie or tell people to do bad things, etc., and then escalate from there. Again, once it says something bad, the only reasonable thing to predict is that it will keep getting worse.
youtube
AI Moral Status
2025-11-02T02:5…
♥ 2
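The comment's "one-in-a-million" argument is really a compounding-probability claim: if each sampled token independently has some small chance p of being a wildly unlikely pick, the chance of at least one such pick over n tokens is 1 − (1 − p)^n. A quick numerical sketch (p and the token counts here are illustrative assumptions, not measured values):

```python
# Back-of-the-envelope check of the compounding-error argument:
# P(at least one unlikely pick in n tokens) = 1 - (1 - p)**n,
# assuming independent per-token sampling with fixed probability p.

p = 1e-6  # assumed per-token chance of a very unlikely sample

for n in (1_000, 100_000, 10_000_000):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:>10} tokens -> P(at least one unlikely pick) = {at_least_one:.4f}")
```

At a million-token scale the probability is already non-negligible, which is the intuition behind "that chance will never fully go away" — though whether a single odd token actually derails a real model depends on decoding and context, which this toy calculation ignores.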
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz76YjTejlRChgtTEt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxcxf0gJAiQtzBNwop4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyOMx9a2BFMFgDbDA14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgySN-abIs7pbS2EZYx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx-v2R0EPv609PcQVJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwqmII7nBBgfCIPvVN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxrSG-EmQwsMRcae0h4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyJ3DNR32VZyCxgfaF4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzrRBKRlB6xsPfkWSx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyaCD8ZK0rXRoXjsYB4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "unclear", "emotion": "fear"}
]
```