Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I like how the overgrown kids believe you can "poison" AI or "boycott" it, with all sorts of funny images in their heads, while knowing nothing about how AI (NN) works.
This BS will not work. AI works the same way as the human brain: it's a system of neurons trained to "be such that it produces a result similar to [this]." Training an AI is like holding a person at gunpoint: either it adapts and does what you want, or it's [ended]. Now imagine you have billions of those persons, you run the dropout in a split second, and they're all also learning from the fallen.
If you can make out a normal picture from a picture with "noise", AI can too. The proof that AI neurons are similar to the brain comes from multiple experiments, for example with mouse neurons, in which those same neurons were literally trained on the same scheme as electronic ones and gave the same results (more powerful, of course, since there were more of them). One such neural network even piloted a virtual replica of a passenger plane.
Even if the whole internet suddenly fills up with 90% noised images, it will just mean AI images are the same 90% noise images... but then another AI will probably remove that noise anyway.
It takes literally an hour to make an AI based on this NS that removes whatever it adds automatically. Beyond that, you can also just create an AI that ignores the raw image data and "sees" the image instead, like people do; that makes the whole thing useless. Not that anyone will want to, because, as I said, AI already doesn't care about this BS, and even when it was done, the 300 gen artifacts were based not on it but on the bad AI itself (which is based on CGPT, which is junk and not even supposed to work with images).
And yes, I tried it myself, and you can try it too in Google Colab. I did it with the old junky diffusion models, which, judging by the quality, were most likely the ones used in the examples. Btw, remember the simple fact that AI needs more than 300k examples to train, so don't just create a new one based on your 1 art; feeding 1 art to a ready NN (retraining) won't make a difference either. You don't even need poison to break a ready-to-use AI with your 1 art, especially if it's bad. E.g., some AI are bad with hands for one simple reason: most artists are morons who can't learn to draw a hand properly and make a lot of mistakes, so the AI just thinks hands are "random" and supposed to be that way. AI fed normal art didn't have that issue (or got extra training for it).
Lastly... thieves, huh? But how much of your art is yours? I've seen at least 100 other people with the same style, or the same parts of the style. You are no different from the AI, same as 99.99% of other artists. All of you either steal from other artists or from nature itself, and that's what NNs are doing too. Stop thinking it's something big and hard to do; it's a low-level job for kids, and that's why AI can learn it so easily.
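The commenter's claim that a simple filter can strip added perturbations can be illustrated with a toy, stdlib-only Python sketch. Everything here is hypothetical (1-D "pixel" values, a fixed alternating perturbation, a moving-average filter); it is not any real poisoning or cleaning tool, nor the commenter's actual experiment:

```python
def poison(pixels):
    """Toy 'poisoning': add a fixed alternating +/-4 perturbation."""
    return [p + (4 if i % 2 == 0 else -4) for i, p in enumerate(pixels)]

def smooth(pixels, radius=2):
    """Naive moving-average filter: replace each value by its window mean."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def mean_abs_error(a, b):
    """Average absolute difference between two equal-length sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# A linear ramp of "pixel" values stands in for a clean image region.
clean = [float(i) for i in range(32)]
noisy = poison(clean)

# Averaging preserves the ramp but cancels most of the alternating noise.
print(mean_abs_error(clean, noisy))                    # → 4.0
print(round(mean_abs_error(clean, smooth(noisy)), 2))  # → 0.88
```

Real adversarial perturbations are crafted in model feature space rather than as simple pixel noise, so whether smoothing actually defeats them is an empirical question; the sketch only shows the shape of the argument.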
YouTube · Viral AI Reaction · 2025-02-22T05:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgyJNKNj3GN5_ahwh894AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugw_AhVwij6Ok_WnzDx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwMrfezd6pOjccxkxV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugx8vBEZyk8eFk7b6j94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyYYV9xbyPc3wiJju94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwBQrdd2inBVjp7DyN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgxDgajcnHmDoIy18GV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugy5bgcyJ2DtHy-u-Tt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxcR7sdMUL7FDd3_2x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyIO_y3tmwAYL-YwMl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
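The raw response is a JSON array of per-comment records. A minimal, hypothetical sketch of the parsing step that turns such a response into a lookup table by comment ID could look like this (stdlib only; the two sample records are copied from the response above, and the stray-`)` repair reflects the kind of malformation raw model output can contain):

```python
import json

# Two sample records copied verbatim from the raw response above.
raw = '''[{"id": "ytc_UgyJNKNj3GN5_ahwh894AaABAg", "responsibility": "none",
           "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
          {"id": "ytc_UgwMrfezd6pOjccxkxV4AaABAg", "responsibility": "user",
           "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}]'''

def parse_coding_response(text):
    """Parse a raw LLM coding response into a {comment_id: record} dict.

    Raw model output is not always well-formed JSON; repairing a
    trailing ')' that should have been ']' is one common fix.
    """
    text = text.strip()
    if text.endswith(")"):
        text = text[:-1] + "]"
    records = json.loads(text)
    return {rec["id"]: rec for rec in records}

table = parse_coding_response(raw)
print(len(table))                                          # → 2
print(table["ytc_UgwMrfezd6pOjccxkxV4AaABAg"]["emotion"])  # → outrage
```

With a table like this, each record's `responsibility`, `reasoning`, `policy`, and `emotion` fields can be looked up by comment ID to populate the per-dimension "Coding Result" view.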