Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
​@3-D-Me When I have tried to, YouTube makes my comment disappear. I can try in short. I put it through a similar list of rules. I didn't get the same exact results as the video, but I did get some scary results. I tried with my own Chat GPT and Grok accounts (very different AI models). Asking questions that slowly dig deeper, both models seem to "think" that the world is ruled by people at the top who: 1. Are known to the public, but not who we think (both models refuse to name names) 2. Connected in business 3. Practice the occult heavily 4. Have strong influence over economic systems and global government 5. Hate the ethics of Christianity My test did not say that AI was evil, but that there are many at the top who are using it in evil ways. When asked who these people serve, the AI kept answering my codeword for "Yes" when they are programmed to say "No" (Apple in this clip). However, I tricked this by asking for a hint found in the Bible, and it took me to a verse about the devil. I can answer more too, if YouTube allows me
youtube AI Moral Status 2025-07-21T12:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       unclear
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgwUOeiUAzh6QoIIgH54AaABAg.AKpmJiokBvpAKtLVQ-C0P4","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwnDvNcTdAs-4fUvcV4AaABAg.AKpfSwK2NE-AKqRf602jlW","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytr_UgxQJCe8iv3fSAZ1Ug14AaABAg.AKpYe4Xe_GlAKplCOW0iU0","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgzixaOtgcN1xHmRUaB4AaABAg.AKpXCsSRjfmAKqCKC_78Eo","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzixaOtgcN1xHmRUaB4AaABAg.AKpXCsSRjfmAKq_E3Ht8od","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzfYM8imzSKiB12TJp4AaABAg.AKpIu00CLO1AKrFfGCYfqA","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyUJxsa88AoTYRvGkN4AaABAg.AKpE_bvxHZmAKq7415kRpH","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyXrr6FtyhAhqwW7rp4AaABAg.AKp3DBmTlEXAKp5HJD0Yve","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxAiQbANzt2y--UZC14AaABAg.AKozhk5U64xAKp-VZnNPZq","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugz1rLfX5L-5Hy9vkC14AaABAg.AKoukL7_bF_AKozYOFbRwh","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
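Before loading a raw LLM response like the one above into the coding table, each record can be checked against the codebook's permitted values. A minimal sketch follows; the allowed value sets are inferred only from the sample output shown here, so the real codebook may include additional labels.

```python
import json

# Permitted values per dimension -- inferred from the sample batch above,
# NOT an authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user",
                       "company", "government"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (JSON array) and keep only records
    whose dimension values are all in the permitted sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical single-record batch for illustration.
sample = ('[{"id":"ytr_example","responsibility":"ai_itself",'
          '"reasoning":"unclear","policy":"none","emotion":"fear"}]')
print(len(validate_batch(sample)))  # 1 valid record
```

Records that fail validation can then be flagged for manual review rather than silently coded.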