Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgxpEBiGI…: "So, I think theres some nuance in the convo about nightshade you've missed becau…"
- ytc_UgyUAZPnK…: "He said one thing that I can't stop thinking about. He said he never knew anyone…"
- ytc_UgwJZJ39x…: "I make ChatGPT remake my art for fun and I’m scared someone will steal my drawin…"
- ytc_Ugy7nclJT…: "People are really stupid... plain and simple. You know of all the issues with t…"
- ytr_UgzBtHDDt…: "@johnjimmy8074 my guess would be it will follow the similar way as movies, games…"
- ytc_UgzMgTYvV…: "AI also helps to hide truth to a large extent by purposefully showing only mains…"
- ytc_UgxKpBcam…: "(Yea just dont read this brah) --- The Qualities of AI-Resilient Careers The…"
- ytc_UgysZM2hV…: "Can't even call these people that use AI artists. All they do is tell an AI to d…"
Comment
Another point: considering that these models were trained on Reddit, 4chan, and similar platforms, how the fuck are people surprised by extremist opinions?
You are what you eat, right? You feed a probabilistic autocomplete model with all kinds of shitty text, the model starts vomiting shitty text, and suddenly we’re dealing with an “intelligent alien that hates Jews”. Lol.
It’s like training a model on erotic novels and then being shocked when it starts talking about sex with users.
LLMs don’t think. They don’t understand shit. They just predict the next word based on previous training. There is no threat, no danger, no fucking intelligence. It is a computer program that does exactly what it was designed to do: produce text based on pre-trained data.
You don’t want an extremist model? Then don’t train it on extremist data.
This sloppy training happened because it was easier to feed the model with all kinds of garbage than to properly curate the data. Now they create this stupid fear-mongering narrative (that AI is an alien or a monster) to hide their own responsibility.
The model gives us exactly what it received. If one day a model does something genuinely damaging because of this kind of negligence, the guilt is on who made the model.
Man, I have no words to describe how much I hate this discourse. Fear-mongering is a tool for controlling thought. Don’t fall for it. It always benefits some group of people, almost always the very groups that propagate it.
youtube · AI Moral Status · 2026-01-05T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
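The four coding dimensions in the table can be checked mechanically. A minimal sketch, assuming the allowed values are exactly those observed in the raw LLM response on this page (the `ALLOWED` sets and the `validate` helper are hypothetical, not part of the actual pipeline):

```python
# Hypothetical validation sketch: allowed values per dimension are inferred
# from the labels that appear in the raw LLM response on this page.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "liability", "regulate", "unclear"},
    "emotion": {"outrage", "indifference", "resignation", "fear", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the known set."""
    return [dim for dim, ok in ALLOWED.items() if record.get(dim) not in ok]

# The coding result shown in the table above:
coded = {
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "outrage",
}
print(validate(coded))  # → []
```

A record with an unexpected label (say, a hallucinated `responsibility` value) would come back in the list, which makes it easy to flag malformed codings before they enter the dataset.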
Raw LLM Response
[
{"id":"ytc_Ugy_OJ_p45jxXgt-2D14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxPMo-3m2TPWh9SFEx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgyvYuE-9tPhCkRp0P94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzixhn74VQqUmGfHCB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgzpIQRvcrnfJFBJ1Kl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzQ-rBbpNOLqvmeVpB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkxPv7k3fvH1-0gXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzWWkk3LpLZY6T865l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx0rcjN3iZpHC0zssZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyclQvkMKbOgOpjWVt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
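The "look up by comment ID" view above amounts to indexing a raw response like this one. A minimal sketch, assuming each raw LLM response is stored as a JSON array of coded records (the variable names are hypothetical; `raw_response` here holds just the first record from the array above):

```python
import json

# Hypothetical lookup sketch: parse one raw LLM response and index the
# coded records by their comment ID.
raw_response = '''
[
  {"id": "ytc_Ugy_OJ_p45jxXgt-2D14AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "outrage"}
]
'''

by_id = {rec["id"]: rec for rec in json.loads(raw_response)}
rec = by_id["ytc_Ugy_OJ_p45jxXgt-2D14AaABAg"]
print(rec["emotion"])  # → outrage
```

Building the dict once gives O(1) lookups per comment ID, which matters when many batch responses are inspected against thousands of comments.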