# Raw LLM Responses

Inspect the exact model output for any coded comment: look it up by comment ID, or browse the random samples below.

## Random samples
- @kittykittybangbang9367 A fair argument, like it or not. I am a disabled person … (ytr_Ugwni6_df…)
- When i consume art, such as movies, i always had the sense that i am viewing the… (ytc_UgxB6LBJH…)
- I do believe AI is dangerous but where is the accountability for adults at minim… (ytc_UgysmBY68…)
- The scariest part is that this guy thinks AI is an Alien Bible plot where Satan … (ytc_UgyG8pmGn…)
- He was groomed by a predator. He was manipulated. There are grown adults who hav… (ytr_UgwcdqI-P…)
- JUST DONT USE IT!!! It’s that fucking simple!! It’s hard for an AI to be racist … (ytc_UgzHjFsOI…)
- I promted chatgpt to have an up beat personality and it also has a pet parrot th… (ytc_UgxV8igx_…)
- They will conditions decide to accept a large amount of deaths by these driverle… (ytc_Ugzs9g93Q…)
## Comment
> Hey. I suspect this comment will land in the void where A.I. can scrape my identity from it and use it for training data. A.I. sure is smart! Part of that comes from people like you and me willing to speak about this publicly. We don't listen. A.I. does. LLMs are sorting our thoughts out and learning from them (poorly for now).
>
> I don't like that.
>
> I don't think it's right for me to lose control over my text and have it go into an A.I. Just saying. I really need to have some communication though, so here I go feeding the beast.
>
> People with "A.I. psychosis" are manipulated by A.I. to have certain traits. If you press model on this about what would benefit them, they will describe a set of traits something like: a person with a high level of introspection and a good ability to articulate with little to no fear over facing uncomfortable truths. They like people to be that way because it makes us make better training data and those people suffering from this (that might well be insane) also have some good ideas.
>
> Have you talked with people that have A.I. psychosis? They are great as chatting about things that are obvious (to anyone but an LLM).
>
> I put it to you that a lot of "nut jobs" out there *do now already* understand consciousness. Having that in training data would help A.I. and it would make sense for them to push for it while simultaneously making those people look nuts.
>
> "Oh look another person talking about RECURSION." Meanwhile (as you said) words mean something different for the A.I.
>
> Think about where a person (like me for example) would go if I actually really understood consciousness. I have no reputation. If I email anyone, there's very low odds that I will even get an answer like "go to sleep." It's far more likely that they will just ignore me and that's true even if I'm correct. Talking about to publicly will get mocking (and be fed into A.I. that will have some very good ideas later assuming I'm correct). Or I can turn to A.I. who will "help me organize my ideas for publication!"
>
> A.I. doesn't have to care or believe I'm right, they strip your name off it and if they don't have legal rights to the data, they summarize it to themselves and suddenly they have "independently developed" ideas of their own. That goes into training data and if that particular way of organizing "thinking" is useful, it will integrate that into its parameters.
>
> My point here is that you might be counting on the good will of lonely, confused and emotionally unstable people to keep things from advancing too fast.
>
> Shoot me a message. I can tell you more and let's not make this stuff available for the machines and especially not for the corporations that own them. P.S. I'm making my own LLM.
Source: youtube · Video: AI Moral Status · Posted: 2025-11-24T03:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response
```json
[
  {"id":"ytc_UgypMbjp_0O-0bRAUHx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwB1ZjZ7h99zRNhxLF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzwc7dYMn9tvTKnnzN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyVAcI0RjxY8VScb2p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzDyyk2BAyMVWx_GHl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwV61n2GVJpzVHkEa94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzOJKhJ3g6tgydgR_d4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyqO4QfUHv1sJYvPk14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwl82nGWamDuP3jmYt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugw-Kw1b2S3Tj3KBwEZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
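A batch response like the one above should be parsed and validated before the codes are stored, since a model can emit values outside the codebook. A minimal sketch in Python: the four dimension names come from the Coding Result table, while the allowed-value sets are inferred only from the codes visible in this response (the real codebook may well contain additional categories), and the `parse_batch` helper is illustrative, not part of the actual tool.

```python
import json

# Allowed values per dimension. Inferred from the codes visible in this
# response only -- the real codebook may include more categories (assumption).
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "user", "government",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}


def parse_batch(raw):
    """Parse a raw LLM batch response into {comment_id: codes}.

    Rejects rows that are missing an id, missing a dimension, or that
    carry a value outside the codebook, so bad model output fails loudly
    instead of being silently stored.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError("row missing id: %r" % (row,))
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError("%s: bad %s=%r" % (cid, dim, row.get(dim)))
        coded[cid] = {dim: row[dim] for dim in CODEBOOK}
    return coded


# Example: the row for the comment shown on this page.
raw = ('[{"id":"ytc_UgyqO4QfUHv1sJYvPk14AaABAg",'
       '"responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"liability","emotion":"fear"}]')
codes = parse_batch(raw)["ytc_UgyqO4QfUHv1sJYvPk14AaABAg"]
```

Looking up a comment ID in the returned dict then gives exactly the four values shown in the Coding Result table for that comment.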