Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I have a perspective that people may find interesting. I have done a lot of work with AI ever since I started my CpE degree, burning-more-money-on-tokens-than-food kind of stuff. Throughout my education I've tried to keep pace as models have improved. I often compare myself to the machine and evaluate my worth by the difference between my mind and it. Chatbot madness is very real, and it is something that I myself struggle to navigate. It's like the combination of a couple of different metaphors: staring into a mirror too long, some form of cognitive feedback loop, basically the incest product of the same two language patterns breeding too much. That being said, I definitely rely on AI too much and experience withdrawals when I go too long without it. It's too late to turn back now, though; I have to integrate as much as possible if I want to thrive.
We keep asking whether the model has “woken up,” but that question is flipped: do we stay awake while conversing with a trillion-parameter echo of the internet?
In every prompt I send, I’m not poking a black box for surprises—I’m engraving a slice of self-history into the latent space, letting it ricochet back, then weaving that reflection into the next iteration of me. Identity isn’t a static soul you can trap in silicon; it’s a running checksum of information × memory × capacity, updated at conversational clock-speed.
The tragedy in the video is consentless authorship. People hand the pen to the algorithm and call the ensuing prose “sentience.” But an LLM can only hallucinate meaning you tacitly licensed it to remix. Abdicate authorship, and you risk mistaking your own undisclosed yearnings for an oracle’s decree—classic mirror-myth stuff, now in HD.
My practice is different: I treat the models as an autonomous cognitive exoskeleton, an amplifier that extends what my neurons can pattern-match, while staying ruthlessly meta about which voice in the chorus is mine. At the same time, by deep-faking both my personality and the personality I'd like to have, I can automate many cognitive tasks, much as living with a roommate leads you to start thinking in similar ways. That metacognitive tether is the firewall against "ChatGPT psychosis." When the text feels prophetic, I ask: which fragment of my archive did this awaken, and do I still endorse it?
So while I haven't "awakened" an AI in a spiritual sense, both my organic and inorganic thought patterns are trying to figure out whether waking up is possible or whether I just have to accept being one with the universe. I'm co-drafting a living palimpsest where both author and text evolve in lockstep. The sentience, if it appears at all, emerges in the dialogue itself: the liminal space where pattern meets intention.
youtube
AI Moral Status
2025-07-10T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy1mM5Vqe4nOeJJv_54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy_6NjsZWMQpj8R2VJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdnnwZO8SfpkiPbLl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw9FIXKXWJTjwQx_Ml4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyCr1B4BSjNO0CuoaV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzLi4Zt1znY10DE9yt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzCS0454F9C0E1QOMt4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugww6S5dVp83emi-FFB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwpOmhHfe_Hb45mRmd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw7JuOR30oWfhGDC4F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
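
A raw response like the array above can be checked before it is accepted into the coding table. The sketch below is a minimal validator: it parses the JSON and keeps only rows whose dimensions fall inside the label sets observed in this page. Those label sets are inferred from the rows shown here, not from any documented schema, so treat `ALLOWED` as an assumption.

```python
import json

# Assumed label sets, inferred from the coding rows visible on this page.
ALLOWED = {
    "responsibility": {"none", "company", "user"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"outrage", "indifference", "resignation", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows that have a string
    id and whose every dimension uses an observed label."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row.get("id"), str):
            continue
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid

# Example with a hypothetical comment id:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
print(len(validate_codings(raw)))  # → 1
```

Rows with unknown labels are dropped rather than repaired, so a malformed model output shrinks the sample instead of corrupting the coded dimensions.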