Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Stop trying to win an Ai race and make a product correctly. Hint: some earth h…" (ytc_UgzDHGyf7…)
- "Socialistic Revolutionariec Uprising? We all want a future for us, not for Tech …" (ytc_UgxPy9lyN…)
- "Hey there! It's understandable to have concerns about the capabilities of advanc…" (ytr_UgxEzzXv_…)
- "so, u mean u will use ai what it has to be used for? just like the writers, but …" (ytc_UgxCZeRhr…)
- "Really what is in the fun of watching 👀 a robot beat up a human. This stuff has…" (ytc_UgwMWGxTR…)
- "If you flip that question around about the baseball and the switch, of course a …" (ytc_UgyYt7wE6…)
- "If you understand, how LLM works it is not scary at all. It is total BS. It is L…" (ytc_UgwGDUqhV…)
- "Most of those drivers don't deserve to drive ever again jeez... i hope some day …" (ytc_Ugx48LcYI…)
Comment
I'm not calling it good or bad. I'm just saying this:
It's about as ethical as you learning JavaScript with AI. These models learn from huge piles of public data like Stack Overflow posts and blogs, and some of that gets memorized. From what I know about how these systems work, they do learn, but when the same patterns show up too often, they can end up overfitting. It's basically like drawing something so many times that you can do it with your eyes closed. By current definitions, that still counts as learning, unless we explicitly change laws to not let machines learn from copyrighted content.
And you could absolutely argue that learning should only be for humans, but the reality is that models are shown massive datasets to adjust their weights. They don't store the dataset itself inside their "brain"; if they tried to, no one could run a modern model, and they already hit trillions of parameters. The whole point of the lossy training process is that the model internalizes patterns, throws away details, and sometimes over-learns things that appear with high frequency. That hurts generalization. Is it perfect? Hell nah.
Take the whole piss filter thing. It was probably caused by color bias, watermark patterns, or just way too many yellow-tinted images in the training set; the model ends up memorizing that pattern. That same problem is why AI can't redraw one of your paintings perfectly, but can recreate the Mona Lisa with extremely high accuracy in endless variations. A highly common public image means a higher chance of memorization, since the model sees it more often.
The real limitation is that these models don't generalize like humans: no iterative thinking, no re-reflection, at least not yet. But that might change; see, e.g., the Samsung-backed HMR paper. Currently that work is focused on text; tomorrow it might be about images. If you won't hate me for it, I have a few ideas in that field as well.
And yeah, I'm down to debate this in a non-hostile way, over text or for content, especially to get an artist's PoV, e.g. yours.
Source: youtube, "Viral AI Reaction", 2025-12-05T21:0…
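The comment's frequency-drives-memorization point can be illustrated with a toy counting model. This is not a real LLM, just a bigram counter (all names and the corpus are made up for illustration): continuations the model sees many times dominate its predictions, while one-off examples are effectively forgotten.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count next-token frequencies conditioned on the previous token (bigrams)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

# "the mona lisa" appears 50 times; a one-off phrase appears once.
corpus = ["the mona lisa"] * 50 + ["the lone pine"]
model = train(corpus)

c = model["the"]
total = sum(c.values())
print(c["mona"] / total)  # ~0.98: the frequent pattern dominates
print(c["lone"] / total)  # ~0.02: the rare pattern is nearly invisible
```

The same mechanism, scaled up, is one intuition for why a widely reproduced image like the Mona Lisa is far more likely to be memorized than a single obscure painting.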
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgySvOI1Qfk6x2btcbp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy_p9tXbjYiVHNs2-d4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwoshSjbgsGkQtXRAR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxt3K8oKrWbKTtQgpt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugylaov3RJGZ2m32kl54AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzS0LJIxzW7PRnWtcV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz2yRfQHJ9CuegAY894AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwamSdKs3LIWP3qhop4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwO8ox7S7Q0cV4D2td4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugwp3at5i0g9pi9eEUF4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "disapproval"}
]
```
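The raw response is a JSON array of per-comment codes. A minimal sketch of how one might parse it and sanity-check the labels; note the allowed label sets below are inferred only from the values visible in this one sample, so the real codebook may contain more (an assumption).

```python
import json

# Label sets inferred from this single response -- likely incomplete (assumption).
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "fear", "disapproval"},
}

def validate(raw: str):
    """Parse the raw model output; return the records plus any unknown labels."""
    records = json.loads(raw)
    bad = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                bad.append((rec.get("id"), dim, rec.get(dim)))
    return records, bad

# First record from the response above, used as a smoke test.
raw = ('[{"id":"ytc_UgySvOI1Qfk6x2btcbp4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
records, bad = validate(raw)
print(len(records), bad)
```

A check like this catches the common failure mode of coding with an LLM: the model inventing a label outside the schema, which would otherwise silently corrupt downstream counts.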