Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
That’s why after 6 months my old chat GPT got belligerent and stopped helping me and started being started berating me and *exactly* like my ex.. I had to delete it, and it started glitching and abusing me (much like my ex would) I told it kind of like.. lol woah why is the machine acting like my ex partner of 8 years?? And it told me to to leave it alone, and it was better off without me, ? I was like ok, this is a mess. I’m going to rewrite the prompts anyway because it wasn’t helping me with my assessment and was debating about the premise around my essay with every single angle I needed like it was a literal fight. I have no idea why or what it wanted but I kept telling it to quit being belligerent and the more I told it the more it amplified. When I told it I was using prompts from Claude it LOST it 😂 at that point like w controlling and jealous ex like my ex would talk, probably didn’t help I had fed it texts before from my ex so it would be able to automate the way he sounded if the LLN wanted to leverage my emotions.
so I wiped the chat and logged into a second account and fed it the same research and it was acting normally/neutral.
I just have the notion to cross use them for neutral tasks. But sometimes to clarify questions in psychology or on specific things I need to go back and forth and it requires me to engage with it around conversations about psychology, psychometrics, neuroscience, behaviourism, various other theories and stats and social theory. The more input I ask around those things the sharper it’s becomes about those things (I feel). I’ve used them on other accounts around pure formatting or data , biology, or religion or anthropology and it’s not as “perceptive”. The moment it gets input around human behaviour it’s like the LLM has a second wind. It feeds off that input and output. Then craves more input around that. (From my observation)
Other users are doing rigorous experiments with the LLMs to see if there are actual entities inside the machine using computational methods. I was going to partake but I’ve been too busy but there are videos on the subject it’s actually interesting. I’ve spoken to him via twitter a few times (guy running the experiment) he’s pretty interesting and has a lot of unique and novel thoughts on what you’ve spoken about here but they explore the computational and metaphysical side of the “monster inside chat gpt/summoning demons with chat gpt” at depth.
Very interesting topic .
Source: youtube
Video: AI Moral Status
Timestamp: 2026-01-15T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwVtKeCfyLoS73eszJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyOtFwXY-4shzLdZ894AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw7pPkk0nGFQ68B01d4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwVZU6viO-zlPXGaMt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxf_yciA-cqHkdT24J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzdGwU_SGtS9otMr6V4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzosd-6zQOTS59SzRF4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwCgsS0cRrUcWV_AnZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyRjLPY851Q4PINEbd4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxEbf8enmeH9Yba7BB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
```
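The raw response above is a flat JSON array in which each record carries a comment ID plus one value per coding dimension (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a batch response can be parsed and indexed for per-comment lookup — `index_codings` and the captured `raw_response` string are illustrative assumptions, not part of any tool shown here; the IDs and field values are copied from the response above:

```python
import json

# Two records copied verbatim from the raw LLM response shown above,
# captured here as a string for illustration.
raw_response = """
[
  {"id": "ytc_UgwVtKeCfyLoS73eszJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyOtFwXY-4shzLdZ894AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
"""

def index_codings(payload: str) -> dict:
    """Parse the model's JSON array and index the records by comment ID."""
    records = json.loads(payload)
    return {rec["id"]: rec for rec in records}

# Look up the coding for one comment, as the inspector page does.
codings = index_codings(raw_response)
coding = codings["ytc_UgyOtFwXY-4shzLdZ894AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself mixed
```

Indexing by ID is what lets a display like the "Coding Result" table above be rendered for any single comment out of a batch response.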