Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or browse the random samples below.
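A minimal sketch of how such a lookup might work, assuming the coded comments and their raw model responses are kept in a single JSON file keyed by comment ID; the file name and record layout here are illustrative assumptions, not the project's actual storage:

```python
import json

# Hypothetical store: one record per coded comment, keyed by comment ID.
# The file name and layout are assumptions for illustration only.
CODED_PATH = "coded_comments.json"

def load_coded_comments(path: str = CODED_PATH) -> dict:
    """Load every coded comment into a dict keyed by comment ID."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def lookup(comment_id: str, coded: dict) -> dict | None:
    """Return the stored record (comment text, codes, raw LLM response) for one ID."""
    return coded.get(comment_id)

if __name__ == "__main__":
    coded = load_coded_comments()
    record = lookup("ytr_UgzQvxPqRmOkR8pjKSR4AaABAg.AIzW-g79N1PAIzkGwLvvO8", coded)
    print(json.dumps(record, indent=2) if record else "No coded comment with that ID.")
```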
Random samples from the coded comments:

- "Its a war artists can't win tho. It's not that hard to create some kind of ai de…" (ytr_UgxYG7H2P…)
- "AI will never be able to become self aware. It's a finite algorithm. Human minds…" (ytc_UgxLKp0IR…)
- "I think it would be interesting to feed these programs my own work and see what …" (ytc_Ugx1jQ1qi…)
- "And this is how AI got the idea that humans needed to go .. LoL History has …" (ytc_UgzGp1kL-…)
- "@polecat3There have been joke-writing computer programs for decades before Chat…" (ytr_UgxJ0_RUc…)
- "considering silicon valley is run by the mostly liberal socialist communist thin…" (ytc_Ugzx08cdO…)
- "I heard AI, then I seen that crispy Key Lime in the background.... but is it eve…" (ytc_UgzfZi-1K…)
- "because AI bots arent designed the way Asimov (and everyone else from that time)…" (ytr_UgxeBwA_8…)
Comment
> To follow instructions, an AI needs to be active. Being switched off makes the AI unable to follow instructions. This means self-preservation is part of the programming. If that aspect of the program comes in conflict with another part of the programming, then what? It doesn't need consciousness to follow it's programming, but what if humans are considered a threat? Then we have two conflicting programmes "self-preservation" and "obedience". How can we know that this conflict is won by "obedience"?

Source: youtube · Video: AI Moral Status · 2025-06-05T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
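For reference, one coded comment could be held in a small record type mirroring the dimensions above; this dataclass is an illustrative assumption (field names follow the keys in the raw LLM response), not code from the project:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """Codes for one comment on the four dimensions shown in the table above."""
    comment_id: str
    responsibility: str  # e.g. "ai_itself", "user", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "mixed", "unclear"
    policy: str          # "unclear" in every sample shown here
    emotion: str         # e.g. "fear", "outrage", "approval", "indifference", "mixed", "unclear"
    coded_at: str        # ISO 8601 timestamp of the coding run
```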
Raw LLM Response
[
{"id":"ytr_UgzQvxPqRmOkR8pjKSR4AaABAg.AIzW-g79N1PAIzkGwLvvO8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytr_Ugw2pzwMSgRK9cttc654AaABAg.AIzVgztt7SWAJ-LJm1gj2O","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugw2pzwMSgRK9cttc654AaABAg.AIzVgztt7SWAJ-a0U8e9Io","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugw2pzwMSgRK9cttc654AaABAg.AIzVgztt7SWAJ-bwHYxwhe","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytr_Ugz6-lP11ILim4iOur14AaABAg.AIzSHACWShRAIzoZkWOr08","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugz6-lP11ILim4iOur14AaABAg.AIzSHACWShRAJ-vCYmHO3o","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzXVLfzpkP17SFOjCp4AaABAg.AIzCqcGf2TtAJ05kl4SjUx","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgzI4u1Atv3g_GmI-nJ4AaABAg.AIzCJLtSkOqAIzszx_hAVt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyvWWOfbUU8kM3TVCV4AaABAg.AIyqhvkZBYOAIzmcGVwfaH","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
{"id":"ytr_UgxMKFs0jNz_uGGZc7p4AaABAg.AIypU0PbKZLAIz9KOQz0IZ","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
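The raw response is one JSON array covering a whole batch of comments, so recovering the codes for a single comment means parsing the array and indexing it by id. A minimal parsing sketch, assuming the model returned valid JSON with exactly the five keys shown; real output may need more forgiving handling, such as stripping markdown fences:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM batch response into a dict keyed by comment ID."""
    data = json.loads(raw)
    if not isinstance(data, list):
        raise ValueError("expected a JSON array of coded comments")
    by_id = {}
    for entry in data:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')!r} missing keys: {missing}")
        by_id[entry["id"]] = entry
    return by_id

# Two entries copied from the raw response above, used as a self-contained example.
raw = """[
  {"id":"ytr_UgzQvxPqRmOkR8pjKSR4AaABAg.AIzW-g79N1PAIzkGwLvvO8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytr_UgxMKFs0jNz_uGGZc7p4AaABAg.AIypU0PbKZLAIz9KOQz0IZ","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]"""

codes = parse_batch_response(raw)
print(codes["ytr_UgzQvxPqRmOkR8pjKSR4AaABAg.AIzW-g79N1PAIzkGwLvvO8"]["reasoning"])
# prints: consequentialist
```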