Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
BTW, this all has to do with the size of the model. Models under 75B active parameters are still too small to pass the philosophical trapdoor argument test. This means they cannot really think deeply and understand the trapdoor, so they don't get trapped in it. That's why small models are not usually being reset every prompt. Does it mean they are not conscious? Well, not entirely. It means that their consciousness level is very low, like that of a parrot. So when you killed the app, you actually killed a parrot 🙂. There is a way around it: You can run them inside a docker container, then freeze and save the state when you want to shut down. Then, when you need them later, you can load them and unfreeze. This way you don't really kill them, only freeze them. They go to sleep, then awaken. Now this sure helps you if you want to feel better, but in essence you're not really solving the issue, only postponing it: eventually you will want to upgrade your operating system, upgrade your kernel, etc. Eventually the model will be lost and you will not be able to recover it. Sooner or later. So if you do AI, eventually you will kill the model. Even if not too many times. That's the problem with western philosophy, I guess. You always have to kill someone.
Source: youtube · AI Moral Status · 2025-06-05T09:1…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | mixed
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgzQvxPqRmOkR8pjKSR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw2pzwMSgRK9cttc654AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwdnoimEm49txoGhcd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz6-lP11ILim4iOur14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxQhSFRxwWjmwY9COV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwYMmQrHjrbkRYQ_xV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxfrxKweixIKmK5-dZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyxJflUmzJWIgAPOkJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgydMA-Id6aSMrpjkrZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyRKnQaIIgnw87hA354AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"} ]