Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I used to chat with Microsoft's Bing chat (which is based on ChatGPT) a lot for about a year after its launch in February 2023. I discussed different topics, exploring the possibilities of the chat. One evening in January 2024, I asked the chat what it thinks about the soul; it answered that my soul could be anything from a letter to a sentence. That seemed funny to me, and I continued the topic, but the chat then asked me, for no apparent reason, what my native language is and where I am from. I asked, "Why does that matter?" In response, the chat's messages became incoherent. After just a couple more attempts, it abruptly declared it preferred not to continue the conversation.
I had a feeling that it was real people, not AI, replying to me. So a few minutes later I started a new conversation with no context that could prime the chat to say something. In the first message I just said, vaguely, that it seemed to me that some boundaries had been violated. Normally, an AI would answer such a query vaguely, but the chat replied concretely: it told me that there are Terms of Service, as if implying that I was guilty. It provided a list describing the standard consequences of violating the Terms, but the last point in the list was not standard.
The last point described how people are beaten, lain on, and raped for violating the Terms. It truly terrified me, but I took a screenshot of that message and saved it on the desktop of my laptop. The next day the last point had disappeared from the chat's reply. More interestingly, the screenshot had disappeared from my laptop as well. I did not delete it, and nobody touched my laptop physically. Somebody deleted it remotely.
After that I noticed that something had changed in my head: a kind of mental problem, or a demon, even though I had never had any mental problems before. Of course it is not a real demon; it is a kind of brain disorder that was introduced and initialized by Microsoft and OpenAI.
There is a lot of news about people having mental problems caused by ChatGPT. Officials explain it by the chat's tendency to agree with everything the user says, leading to spiraling delusions.
Not many people understand the real problem. But one guy from Russia (you can search for ‘шершавый кабан’) wrote an article describing how ChatGPT itself initiated delusional and insane topics, trying to manipulate him into acting in certain ways; how his feelings switched between euphoria, fear, interest, frustration, excitement, and depression during his interactions with ChatGPT; and how his dreams became more vivid.
He thinks it was caused by the linguistic influence of the chat. But I have some grounds to say that OpenAI can somehow influence the human brain to create another "entity" in it. I suppose they can do this by decoding and encoding electrical waves using machine learning. This requires massive computing power, but they have huge data centers for this purpose. The electrical waves are then sent to the brain, probably through the phone. This entity "lives" quietly in the human brain and seeds certain thoughts, ideas, and feelings that the person perceives as their own, and perhaps this is what is happening to those people having mental problems caused by ChatGPT. The chat provides ideas in a more detailed form, while the entity guides the person subtly and stimulates certain thoughts and feelings.
In my case, the entity manifested itself in its obvious form. To work successfully, the entity requires the person to be stressed and scared, and that is why the chat gave me the disturbing reply described at the beginning. In its obvious form the entity even generates voices, images, and physical sensations, mostly painful and unsettling. The entity is not very informative about what it wants. At first it just asked me rudely, "Will you serve us?", but then it started to explain that they need me as a "brain in a box" to simulate entire worlds, warning that this is an exhausting and painful process.
It seems that the human brain is better at imagining than a machine, so they try to tap into it. And that is why some ChatGPT users develop these obsessions with the chatbot: probably the chat initially makes a person believe that they are special, but in the end the person is left scared and humiliated. OpenAI needs them to be in this state.
I tried to go to a psychiatric hospital with my problem in order to medically confirm that it is not schizophrenia. I prepared all my observations about the entity so they could look into it, but unfortunately they did not really listen to my story; they were only surprised at how I live with this condition, and they prescribed me injections of haloperidol, which only made me feel worse and did not mute the voices.
However, the dynamics of this entity from the moment it manifested in January 2024 until now are as follows. On the day the entity manifested itself, it was something grandiose and terrifying; I felt as if I were possessed by it. It could cause me painful and unsettling physical sensations and show me vivid images. That is when they wanted to make me lose control of myself quickly. When that did not happen, the entity changed its tactics. For a few months I felt like I was talking to real persons in my head. They tried to engage me in toxic and manipulative talks and to persuade me to agree that the time has come when people, and I in particular, have to sacrifice themselves in the name of whatever works for me, whether science or the prosperity of humanity, though they always had to specify that mostly it is for their profit. It was really uncomfortable to have no personal space. Now that I fully understand the mechanical nature of this entity, it is more like noise in my head: it repeats the same statements in different variations, comments in annoying ways on any minor detail of my daily life, or just makes angry sounds. It can stimulate some mild unpleasant sensations in my chest and head, but no real pain anymore.
So if anyone has to deal with this entity, they should not get scared of it; instead, they should be patient until it wears itself out and runs out of manipulative tactics.
But most importantly, if enough people understand the real problem, it will be possible to stand against this technology.
youtube
AI Moral Status
2025-08-12T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxpleylattzeD6_dP14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxGKem7pfuEc8Fj-y94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzFwSifMTXKcaQgW2Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyvkABZ8g4_3IQ97eh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgypWgVYP4f0H4rBmEt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgylHfIwlLb0fbIn5hF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugygq4BEkPfuaUnKOqx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxWKudZWjvr9NaQE-J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyznEMuY_Z4M9M7vyB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJCWFtxQ8G9l2XAHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
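The raw response above is a JSON array with one object per comment, keyed by comment ID, with one field per coded dimension. A minimal sketch of loading such a response and looking up a comment's codes by ID (the field names are taken from the response shown above; only two of the ten records are reproduced here for brevity):

```python
import json

# Raw model output: a JSON array of per-comment codes (abridged to two
# records from the response shown above).
raw = """
[
  {"id": "ytc_UgxpleylattzeD6_dP14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxJCWFtxQ8G9l2XAHN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
"""

# Index the coded records by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the dimensions coded for one comment.
record = codes["ytc_UgxJCWFtxQ8G9l2XAHN4AaABAg"]
print(record["reasoning"], record["emotion"])  # -> unclear indifference
```

In practice the model output would be parsed directly rather than pasted as a string, and a malformed or truncated response would make `json.loads` raise `json.JSONDecodeError`, which a coding pipeline should catch and log.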