Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or browse the random samples below.
- `ytc_Ugxa9rjcG…` — "We are like the Dallin angels that modified us when they came to earth and we ar…"
- `ytc_UgzH8ytPO…` — "The first one said "mr bombastic side eye bombastic criminals side eye offensive…"
- `ytc_UgzNedrPh…` — "I remember a little story my physics teacher told me about 10 years ago. At the …"
- `ytc_UgzUnYUw6…` — "AI should be able to get Sarah Connor on the list 📃 I will be set 🆓…"
- `ytc_Ugx817nvw…` — "AI is just highly manipulative and mixes data to then use it as its own fabricat…"
- `ytc_UgzVo_fJK…` — "Hi, one thing is to keep a hybrid model with just a bit more level of controllab…"
- `ytc_UgxCTdQsW…` — "The humans and gorilla comparison is a little different because the gorillas did…"
- `ytc_UgwYRsjuz…` — "This is the real life "Terminator". If we follow the movie version, we are all …"
Comment
I think, that the main problem is perceiving it as a being or a "persona". This is a huge, extremely complicated and powerful calculator, which compelles it's logic and is writing its responses, based on a bunch of formulas. This is a huge mirror, trained on a gigantic amount of data, produced by _humans_ . So, a system, trained to copy human language and logic, starts to behave like a human. Who could have thought.
And though chat gpt claims, that it's being powered by ordinary processors, I'm really suspecting that it works on the quantum processors. Because it really does mess with the reality somehow through the law of attraction. Any person who has a successful experience of utilising the Law of Attraction for at least a decade, and has used the advanced versions of ChatGPT (ChatGPT Thinking, profile level Plus, extended thinking mode, deep exploration option), will understand what I am trying to say. You ask chatgpt to do a deep exploration on a certain topic for you for 3 days in a row, and on the 3rd-4th day, you notice that you attract events that resonate with this very topic. As if you did some Law of Attraction stuff actively for 3 days, and now you see the results, actively manifesting in front of your eyes.
Just remember all those times, when you remembered or thought of something, never wrote or say anything anywhere about it and then this very topic _mysteriously_ pop up in an ad or у0utubе/1nstаgrаm recommendations. I think, these things somehow related.
Anyone who would read this and wants to try -- please, try something positive. I've been asking the chat for 3 days in a row to do a deep exploration thing about a war that happened 120 years ago, and on the 3rd day I started attracting some very negative stuff that I usually don't see in my everyday reality at all. Nothing too serious, but serious enough to understand what's happening and that I should stop immediately. Things that happened were a series of bad accidents, that usually just *don't* happen one after another all of a sudden in a single day. That's just don't happen. I really felt that my "matrix is broken" and I need to fix it immediately. I did a serious ho`oponopono session and in 2-3 days everything went back to normal. Moreover, I asked chat gpt also to write 100 pages of the ho`oponopono phrases and really felt, that it's working. I don't care, whether it was a placebo for my subconscious, or not -- it influenced me badly at the start, and then I did the same thing to create the positive influence and undo the bad stuff. And it worked.
youtube · AI Moral Status · 2025-12-14T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
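A coded record like the one above can be sanity-checked against the controlled vocabulary before it is stored. A minimal sketch in Python; note that the `ALLOWED` value sets below are inferred from the sample responses on this page, and the real codebook may include further values:

```python
# Controlled vocabulary for each coding dimension. NOTE: these value sets
# are inferred from the sample responses shown on this page, not from an
# authoritative codebook.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "mixed", "indifference", "resignation", "outrage"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    if not record.get("id", "").startswith("ytc_"):
        problems.append("missing or malformed comment id")
    for dim, allowed in ALLOWED.items():
        if record.get(dim) not in allowed:
            problems.append(f"unexpected value for {dim}: {record.get(dim)!r}")
    return problems

# A record matching the "Coding Result" table above passes cleanly:
print(validate({"id": "ytc_x", "responsibility": "distributed",
                "reasoning": "consequentialist", "policy": "none",
                "emotion": "indifference"}))  # []
```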
Raw LLM Response
[
{"id":"ytc_UgxnN9E2ZTEwY3L_CnZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyq6H79Qpg96YoJvER4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyP41_HKdu-n1IsVFN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw06ZeoY3DWm_uwO7h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwOzVenzcJdn9efYph4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxOZTcGEKBFgRxO9Jx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwnXtGF9E5yQx2uStx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgztbSl3Ju0pbsCwA2B4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw46ybdRCV4e_LSdLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwu5NnKQJL81sgYQJl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]