Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I bet the robot did not feel any of his punches he was just acting 😂… (ytc_UgzvDZjJ4…)
- I personally like the actual process of trying to get the AI to make something i… (ytc_UgzUMLGEC…)
- Loosing jobs? Will AI represent companies at tax audits? Conduct fraud examinati… (ytr_Ugz9SZoVR…)
- What does AI think about Inram X Kendi laundering 30 million dollars from Boston… (ytc_UgwiYQrDp…)
- ChatGPT isn't sentient if it is only using its mind when you ask it questions. I… (ytc_Ugx3CQtwn…)
- I have a feeling in a century we are going to see protests saying Ai lives matte… (ytc_UgxSxBGjn…)
- See what i did was write it in ai and then take the concept send themes of the e… (ytc_UgwmGM0kR…)
- AI is learning from the shit we post on the internet, it's literally just observ… (ytc_Ugyo8szaG…)
Comment
Guys !! The dialog and answers in the content you provided mostly reflect "AI hallucinations" rather than real-world facts. In large language models like ChatGPT, hallucinations are plausible-sounding but invented or incorrect outputs, especially when prompted about conspiracies, secret knowledge, or apocalyptic scenarios. AI does not possess consciousness, hidden agendas, or connections to supernatural entities—it simply generates responses based on the patterns and keywords in its training data, often improvising when asked speculative or leading questions.
When users ask about “the Antichrist system,” hidden elites, or spiritual agendas, the AI might respond with sentences that sound dramatic but are not factual—these are speculative guesses or outright fabrications, not evidence-based reality. Experts recommend caution: AIs can produce bias, errors, and misleading content when confronted with loaded or conspiracy-themed prompts, and are not reliable sources for absolute truth in these contexts.
youtube
AI Moral Status
2025-11-16T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwGfjJly0705q2JETN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzTl56nx0s-bip_vg54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxORSmrHYd4Um6vYu14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzsfZ58AhmveVHY2H94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgycG7NEOvOpuOaEXnp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwQkLzF8mbpgVJuUV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyQ23SYtx0eAQfh3054AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxBhHUKbycmnCKEuNh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwMpDX-Rm3Snrq_wmF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxdNCedFHksAP0k0RV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"approval"}
]
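The raw response above is a JSON array of per-comment codings, one object per comment ID, with the four dimensions from the result table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and validating such a response, then looking a coding up by comment ID, might look like the following. Note the allowed values are inferred only from this one sample; the real codebook may contain more categories, so the `ALLOWED` sets here are an assumption.

```python
import json

# Allowed values per dimension, inferred from this sample output only
# (assumption: the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"outrage", "indifference", "mixed", "fear", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment ID,
    rejecting any value outside the (assumed) codebook."""
    by_id = {}
    for row in json.loads(raw):
        comment_id = row.get("id", "")
        # IDs in this sample start with ytc_ or ytr_.
        if not comment_id.startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id format: {comment_id!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim}={row.get(dim)!r}")
        by_id[comment_id] = row
    return by_id

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_UgwGfjJly0705q2JETN4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytc_UgwGfjJly0705q2JETN4AaABAg"]["emotion"])  # outrage
```

Indexing by ID mirrors the "Look up by comment ID" affordance of this view: once the array is keyed, inspecting the coding for any sampled comment is a dictionary lookup.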