Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwrOyKIS…: "Have you seen how your spell check doesn't work at all? Or all the out of date …"
- ytc_UgzWlQm9U…: "I don't think the AI is the problem (per se, improvements are being made as we s…"
- ytc_Ugy6LX60W…: "I suck at art, especially humans. I use AI as a tool and to pass time. I WOULD s…"
- ytr_UgxH9p6t-…: "Lol, at this rate, you guys probably generate more ai art than anyone else.... A…"
- ytc_Ugya0mhI2…: "This is actually good. AI will work its a*s off and we will just chill. Imagine …"
- ytr_UgxBG1byk…: "@owo2610 my bad. ChatGPT 4, not 4o, you're correct. However, there's a lot of ex…"
- ytc_Ugw4c6EeG…: "All AI should be given a copy of Frankenstein to read, so it can understand its …"
- ytc_UgwXn1kZV…: "i found out from a school teacher they been using playground ai its straight BS …"
Comment
That "A.I." bro I previously mentioned argues that "overfitting" (outputs that look too similar to other things) can be fixed by simply "feeding a more diverse database" and that it isn't considered copyright infringement, that "it's only plagiarism if you want to use it in that way"... which is a complete lie.
If copyright is fine in just the way it is, why do some people still whine about their prompts being "stolen" while drowning out the voices and works of those around them?
"If you know how AI works you would understand that it is classified as overfitting, a term used as a flaw in any neural network (even a human mind)."
"Even a human mind"
The fact that optimization can "feed information to a neural network", have a model reference it, and redistribute pixel groups without directly using the optimization data does not change the nature of what is going on. We don't teach someone to illustrate because the world needs more images, but so that they learn to do it on their own, because if they don't draw, it will show in their ability to actually draw in a way that truly reflects them.
This is to say that the more we rely on accelerated algorithms to fulfill certain duties (creating a character profile, generating simple backgrounds) simply because one considers these new algorithms to be the progress of technology, the more they will be needlessly used to that end without really improving one's own results or one's expressive abilities. The only explanation left for what counts as a sign of value is that something was "difficult to do" or that "it took a long time".
When "A.I." is concerned, it's like fighting for dominance over the canvas TO FIX OR HIDE FLAWS THAT AREN'T EVEN YOUR OWN, only these "tools" were designed to generate vastly more than what you've actually produced, making them inefficient tools for expression.
... It's morally wrong to deny people the ability to be compensated for work they've learned to do with older tools when the means of developing the new sets of tools exploits those skills by passing them off as your own, especially when you are comparing people with an OBJECT that processes data... telling me that my brush does the work for me... and some people have the gall to ask me why there are others who care.
["A.I." bro:] -------- "Certainly I'm not here to convince you how much digital art and 3D are already AI. [...] knowing how AI works, I realized that it is impossible for a tool like content awareness to exist without a neural network, and, by no one saying they consented their work to this neural network, I assume it could be trained on unlicenced material."
Is this a joke?
What did I just write a couple paragraphs above?
Please tell me I am not being brainwashed, right now... Actually, no – shut up and get out of my sight.
Tech bros want artists to "stop attacking them", but let's be real—who chose to pick a fight with others in their own spaces without any consideration for what means of expression they value?...
I am sitting in front of this screen, literally being treated like I'm stupid.
Right, this is the internet—it's full of nonsense like this, only it's been leaking out into almost every other aspect of life, and I'm pointing it out.
More data “solving” everyone's problems is like saying that beating me until I go quiet would be the solution, like they don't have enough with everything mankind has posted online—what's next? Will they find a way to make cloning legal with their "A.I."s and take my blood, too? Why stop at data when you could have much more? I guess being useful is really what matters most, after all, and anything “useless” should be thrown away from the equation of "progress".
You know what? Screw it- Let's just go back to eugenics because those who have the resources deserve to be the best superhumans there can be. The ends justify the means because we have the power to do it, and that makes us right.
...is my idea of how a psychopath would think.
Since we are living in a world of con artists, copyright would be favoring them.
"But A.I. is just a tool like any other—it's all the same thing until we decide it isn't."
And with that, what even is plagiarism? According to Google, it's "the practice of taking someone else's work or ideas and passing them off as one's own."
Oops.
But why care when you can just rewrite the definitions of words—or perhaps you'd prefer calling it "ADAPTING" their definitions. Looks like I should know my place because an algorithm processed what I made, and people decided that no, it's just semantics, and that it has nothing to do with me.
Are they just going to continue trying to make me believe that machines are responsible for what they were optimized to do, but then claim ownership of things when it's convenient for them? It's either them or the machines (am I alone in thinking a lot of people can't make up their minds anymore?), but I'm not gullible like they seem to believe.
It's cognitive dissonance... It's muddying intent, making a mockery of who we are, something we take for granted.
For goodness' sake.
This isn't me trying to own information – I am pointing out how some uses of technology are fundamentally dehumanizing because of how they reduce intent by blurring the line between humans and machines and make us out to be less than apprehenders of meaning. You can tell by how much people like this "A.I." bro I've mentioned obsess over originality, being replaced, and where the industry is heading... or when people talk about "predictive policing", as in manufacturing objective judgements only to divert blame onto machines... or when someone makes me write to an accelerated algorithm without telling me in the first place and then lays this on me: “because I’m transparent, I reveal to you that I’ve been using Google Bard—and now, I hope you can see the potential of this technology”
People are becoming far too comfortable for their own good. Or maybe they aren't?
Wtf do I know – I just really can't figure people out no matter how hard I try.
Source: YouTube · "Viral AI Reaction" · 2024-12-17T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_UgxsjFXZpRM4emHyksV4AaABAg.AC7zbb-FXuLAEFPgdcHnBp","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugx9mfyQZYHoIp8AbTR4AaABAg.AC7lRV0XrzoAC7lXurIPgG","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugx9mfyQZYHoIp8AbTR4AaABAg.AC7lRV0XrzoAC7p8fzbj_q","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwnFDsbXXFzMeLlgHd4AaABAg.AC7A-VpG92_ACCqRx3JsO9","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwQ9pCaCvvh8h--Oy14AaABAg.AC5oRvM30GkAC5vnhtwLim","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwChG9m33V72LqBjxt4AaABAg.AC4khIB1xMVAC8MgxS-Qv8","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgyI0BgR8dQwXk6YPtd4AaABAg.AC3nBML6aTNACCqtx1ucMs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyYqcglmnbZUBaRq994AaABAg.AC3dMfZ8XEKACCrA6vnDX3","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzpoyQQoE21g1Ohodh4AaABAg.AC2mIWeP9E3AC3jMydFPNH","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugynlv1nID-6eTwLZoN4AaABAg.AC1c8t9SSkrACXUBILn-Yp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```
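The lookup-by-comment-ID workflow this page describes can be sketched in a few lines: parse the raw LLM response (a JSON array of coded comments) and index it by `id`, keeping only records whose values fall inside the coding scheme. This is a minimal sketch, not the tool's actual implementation; the function name `index_response` and the `ALLOWED` sets are assumptions inferred from the dimension values visible on this page, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension (ASSUMPTION: inferred only from
# the values that appear in this page's output, not the full codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "developer"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "resignation", "outrage", "fear"},
}

def index_response(raw: str) -> dict:
    """Parse a raw LLM response and index the coded comments by ID,
    dropping records that lack an ID or use an unexpected value."""
    coded = {}
    for record in json.loads(raw):
        comment_id = record.get("id")
        if not comment_id:
            continue
        if all(record.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[comment_id] = record
    return coded

# Hypothetical ID for illustration; real IDs look like the truncated
# ytc_/ytr_ strings shown above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"liability",'
       '"emotion":"outrage"}]')
coded = index_response(raw)
print(coded["ytc_example"]["emotion"])  # outrage
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each inspected comment is a single dictionary access rather than a scan of the whole response.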