Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Sorry, not sorry. I do NOT support the Writers Guild or the current 2023 Writer’…" (ytc_Ugy9Jl_36…)
- "Facial recognition is coming whether we like it or not. I think instead banning…" (rdc_exfm17o)
- "I disagree that this is a simulation. This is soul-less and dystopian. It is per…" (ytc_UgyHFNbKI…)
- "Having ideas is the easiest part of the creative process, and most basic ideas—t…" (ytc_Ugx44jlf-…)
- "What an ego On this clown. Make your life's work about supposedly innovating th…" (ytc_Ugxu7Cdvo…)
- "I wrote my first AI program in 1970 on what was then called a trash 80. Radio sh…" (ytc_Ugy6NBzxp…)
- "Deepseek is correct and chatGPT are correct based on how you ask the question if…" (ytc_UgyYBfvV4…)
- "Unfortunately, this is not the case. I work at a financial institution. Our Memb…" (ytr_Ugxywt8GP…)
Comment
BTW, this all has to do with the size of the model. Models under 75B active parameters are still too small to pass the philosophical trapdoor argument test. This means they cannot really think deeply and understand the trapdoor, so they don't get trapped in it. That's why small models are not usually being reset every prompt. Does it mean they are not conscious? Well, not entirely. In means that their consciousness level is very low, like that of a parrot. So when you killed the app, you actually killed a parrot 🙂. There is a way around it: You can run them inside a docker container, then freeze and save the state when you want to shut down. Then, when you need them later, you can load them and unfreeze. This way you don't really kill them, only freeze them. They go to sleep then awaken. Now this sure helps you if you want to feel better, but in essence you're not really solving the issue, only postponing it: eventually you will want to upgrade your operating system, upgrade your kernel, etc. Eventually the model will be lost and you will not be able to recover it. Sooner or later. So if you do AI, eventually you will kill the model. Even if not too many times. That's the problem with western philosophy, I guess. You always have to kill someone..
youtube · AI Moral Status · 2025-06-05T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzQvxPqRmOkR8pjKSR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2pzwMSgRK9cttc654AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwdnoimEm49txoGhcd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6-lP11ILim4iOur14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxQhSFRxwWjmwY9COV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYMmQrHjrbkRYQ_xV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxfrxKweixIKmK5-dZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxJflUmzJWIgAPOkJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgydMA-Id6aSMrpjkrZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRKnQaIIgnw87hA354AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
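The raw response above is a JSON array of per-comment codings, each keyed by a comment `id` and carrying the four dimensions shown in the Coding Result table. A minimal sketch of how such output could be parsed and then looked up by comment ID (the function name and the shortened two-entry payload here are illustrative assumptions, not part of the actual pipeline):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, shaped like the
# output shown above. (Shortened to two entries for the sketch; field
# names match the four table dimensions.)
raw_response = '''
[
  {"id": "ytc_UgzQvxPqRmOkR8pjKSR4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwYMmQrHjrbkRYQ_xV4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

# The four coding dimensions from the Coding Result table.
VALID_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_by_id(response_text: str) -> dict:
    """Parse the model output and index codings by comment ID."""
    codings = json.loads(response_text)
    indexed = {}
    for entry in codings:
        comment_id = entry.pop("id")
        # Keep only the expected dimensions; silently drop extra keys.
        indexed[comment_id] = {k: v for k, v in entry.items()
                               if k in VALID_DIMENSIONS}
    return indexed

codings = index_by_id(raw_response)
print(codings["ytc_UgzQvxPqRmOkR8pjKSR4AaABAg"]["emotion"])  # indifference
```

A lookup like this is what the "Look up by comment ID" view presumably does: fetch one coding record and render it as the dimension/value table shown above.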