Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I think this video is problematic because of how it is framed. It presents AI in…
ytc_UgznIeu73…
think about it we’re only able to create these things because of the power of th…
ytc_Ugw_GEF85…
What is A.I.? It is NOT artificial intelligence as many were told and never que…
ytc_UgwBOjnGJ…
It's amazing that you have a problem with AI being used to create things, that c…
ytc_UgzJeM4vf…
What kind of crap is this? she is a robot!!!! they are not our friends.…
ytc_UgxxDcQ-8…
That feels more like a misleading advertisement issue to me. Full self driving a…
ytr_UgwLkAbxo…
He said the code, STUPID. Programmers could enhance code for the AI’s for us. Th…
ytr_UgxsYuAOy…
The grand question should be not about whether the AI becomes conscious, but abo…
ytc_UgygP5WEe…
Comment
@3-D-Me When I have tried to, YouTube makes my comment disappear. I can try in short.
I put it through a similar list of rules. I didn't get the same exact results as the video, but I did get some scary results. I tried with my own Chat GPT and Grok accounts (very different AI models).
Asking questions that slowly dig deeper, both models seem to "think" that the world is ruled by people at the top who:
1. Are known to the public, but not who we think (both models refuse to name names)
2. Connected in business
3. Practice the occult heavily
4. Have strong influence over economic systems and global government
5. Hate the ethics of Christianity
My test did not say that AI was evil, but that there are many at the top who are using it in evil ways.
When asked who these people serve, the AI kept answering my codeword for "Yes" when they are programmed to say "No" (Apple in this clip). However, I tricked this by asking for a hint found in the Bible, and it took me to a verse about the devil.
I can answer more too, if YouTube allows me
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2025-07-21T12:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgwUOeiUAzh6QoIIgH54AaABAg.AKpmJiokBvpAKtLVQ-C0P4","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwnDvNcTdAs-4fUvcV4AaABAg.AKpfSwK2NE-AKqRf602jlW","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytr_UgxQJCe8iv3fSAZ1Ug14AaABAg.AKpYe4Xe_GlAKplCOW0iU0","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgzixaOtgcN1xHmRUaB4AaABAg.AKpXCsSRjfmAKqCKC_78Eo","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytr_UgzixaOtgcN1xHmRUaB4AaABAg.AKpXCsSRjfmAKq_E3Ht8od","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytr_UgzfYM8imzSKiB12TJp4AaABAg.AKpIu00CLO1AKrFfGCYfqA","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyUJxsa88AoTYRvGkN4AaABAg.AKpE_bvxHZmAKq7415kRpH","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgyXrr6FtyhAhqwW7rp4AaABAg.AKp3DBmTlEXAKp5HJD0Yve","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxAiQbANzt2y--UZC14AaABAg.AKozhk5U64xAKp-VZnNPZq","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugz1rLfX5L-5Hy9vkC14AaABAg.AKoukL7_bF_AKozYOFbRwh","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
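A raw response like the one above can be checked before the coded values enter the dataset. Here is a minimal sketch in Python; the allowed value sets are inferred only from the records visible on this page (the full codebook may contain more categories), and the function name `parse_coded_batch` and the `ytr_x` id are hypothetical:

```python
import json

# Allowed values per coding dimension, inferred from the responses
# shown above; the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "company",
                       "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments),
    keeping only records with an id and in-vocabulary values."""
    valid = []
    for rec in json.loads(raw):
        if "id" not in rec:
            continue  # a record without a comment id cannot be stored
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical example: one well-formed record passes validation.
raw = ('[{"id":"ytr_x","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"none","emotion":"fear"}]')
print(parse_coded_batch(raw))
```

Rejecting out-of-vocabulary values rather than silently storing them makes model drift (e.g. an LLM inventing a new emotion label) visible as dropped records instead of corrupt rows.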