Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Oh man ChatGPT is a pain to work with, it cannot even do the same task over and over again which every child can do. The answers degrade very fast and it is unable to spot it's mistake. So you have to open a new prompt, it works some time and degrades quickly again. There is zero intelligence involved. Even if I told it that it should just continue or work as it did in the first answer it can't. It does not even understands the problem. Even a primate or a crow would be able to just repeat the same task over and over again if trained properly. ChatGPT can't however. After some time I always have to start over with the same prompt again to make it work properly. Even if I clearly told it that it should just do what the prompt says. It's so easy, but even that is impossible for it. It can do it for maybe 5 answers but then just alters the answers so they become unusable. Is is also impossible to teach it, it can't recognize it's mistake. Even if letting it compare the first answer and the broken one it leads it to see the difference it doesn't know how to replicate the first answer however. I have to start over again with the same prompt. It's impossible to make it remember the prompt/task. And ChatGPT itself told me it can't remember and that is the reason why, there is no intelligence without memory, everything is just probability and no reasoning at all. AI understands nothing. Nothing at all.
Source: youtube · AI Moral Status · 2025-07-09T15:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzOIpp2sKlDeHn-Ebd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxYHYS_QcUt0pkSrOd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz5la4OkSho2lDmC214AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz-34myA7WtOFcoEsF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgywBImaceaSCAFs6_R4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw05vfmiKr1NYrVS1h4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx5iP5ZGJvY69HxsIV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy3n8oPJG2K0d6QjxJ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw8XK306A3TY3KDdqp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwjYPrLkm3-F5z-aFd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]
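A raw response like this is only usable downstream if every record parses and every dimension takes a value from the codebook. The sketch below shows one way to validate such output; the allowed category sets are inferred solely from the values visible in this record and are likely incomplete, so treat them as placeholders for the actual codebook.

```python
import json

# Category sets inferred from this record alone -- an assumption, not the
# project's real codebook. Swap in the authoritative lists before use.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "mixed"},
    "policy": {"none"},
    "emotion": {"outrage", "indifference", "approval", "fear", "mixed"},
}

def validate(raw: str) -> list:
    """Parse a raw LLM response and check each record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset appear to carry a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgzOIpp2sKlDeHn-Ebd4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
records = validate(raw)
print(len(records))  # 1
```

Failing loudly on an out-of-vocabulary label is deliberate: silently keeping a malformed record would corrupt any aggregate statistics computed over the coded dimensions.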