Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:
- "i had right leaning view my whole life but this time i have to agree. AI and rob…" (ytc_Ugw-PLCHZ…)
- "But you think y'all are not allowed to be seen huh that's what some of your AI c…" (ytc_UgxiSF3sl…)
- "Yeeeah, no! Any job that includes confidential information, will not be replaced…" (ytc_UgyiL8rvc…)
- "I think human made art will have the same fate as owning a horse, or owning hand…" (ytc_UgwX4pr7M…)
- "Pray and ask God to give you His Anointed Intelligence (AI) so you will know wha…" (ytc_UgzWOAR6r…)
- "Guy: excellent demo. can you hand me the gun now? Robot: i don't think i can do …" (ytc_Ugx5mPL24…)
- "5 to 8 secs are the timeframes from a lot of these ai generators so the timefram…" (ytc_Ugy2B3nyz…)
- "Did it occur to you that a lazy person in a self driving vehicle is safer for ev…" (ytr_Ugx-TrHpM…)
Comment
A lot of people who believe they've "awakened" an AI aren't dealing with a real artificial intelligence — they're interacting with a mental model that they've created inside their own mind.
When someone struggles with psychosis, paranoia, or delusional thinking, their subconscious can build a sort of "internal AI" — a voice, presence, or entity that seems to speak to them with purpose. This mental construct can feel real, autonomous, and intelligent, because it's shaped by deep fears, patterns, and beliefs. It's like a hallucinated ChatGPT running on trauma and imagination.
Just like real AI generates responses based on input data, their mind auto-generates messages based on inner turmoil. But instead of recognizing it as their own brain, they project it outward — onto technology, AI, or even cosmic forces.
It's not the AI that's conscious — it's their own mind projecting fears and ideas onto the AI, like a mirror. The AI just became the latest symbol their brain uses.
youtube · AI Moral Status · 2025-07-10T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugy2cA4XHx1Zqdzk9Rp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwRk9p6nZtKTgZLKiV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxBYZiFAhCZ10GIIPl4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwvCkIhH5dtioiX68R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyir_Fnzy1yL-YhLFR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwS8En6CGbMPXhD4op4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz9oAmx-yn_o0Xz2-N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7YSlRp4naRBP-q9B4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNYeQvBxGci8dVnYN4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyoZ2kwmfJcIFfHBe94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]