Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't really care about AI or AI art—not in the way those terms are typically thrown around. What people are calling “AI” isn't actually artificial intelligence in the sense of conscious decision-making or independent thought. These systems aren’t sentient beings making creative choices; they’re extremely sophisticated pattern-recognition and generation tools. Just because something can mimic creativity doesn't mean it possesses intent or understanding. So let’s get that clear: these are programs, not artists.

Now, with that out of the way, let's talk about Nightshade. You’ve got OpenAI, a handful of security researchers, and academics saying Nightshade “works”—that it can poison training data and disrupt the output of generative models. And on the other side, you’ve got actual programmers and tinkerers on platforms like Reddit who’ve run the tools, looked under the hood, and are saying it either doesn’t work or is trivially easy to detect and remove. That disconnect reveals a deeper misunderstanding—not just technical, but ideological.

The people building this technology see it as inevitable progress. From their perspective, it's just data—freely available images used to teach a tool to reproduce styles and concepts. There's a utilitarian mindset: data in, content out. They're chasing efficiency, scale, and performance metrics, not concerned with whether the input was ethically sourced unless it becomes a legal issue.

On the flip side, the artists who are trying to sabotage the system with tools like Nightshade aren't just worried about style mimicry—they're reacting to what they see as outright theft. From their perspective, this isn’t about progress; it’s about having their life's work vacuumed up without consent to fuel tools that could eventually displace them. Nightshade, to them, is a form of digital sabotage, a protest weapon in a fight they feel they were never asked to join.
Then there’s the user base—the people who just like typing a prompt and watching art appear. For them, this tech isn’t an ethical battleground. It's a dopamine machine. They don’t care how it works, or who trained it, as long as it outputs cool stuff fast. To them, a prompt and a picture is magic, not exploitation.

So what you're seeing is three wildly different mindsets colliding:

1. Developers—focused on scale, optimization, and technical achievement.
2. Artists—focused on consent, control, and cultural value.
3. Users—focused on immediacy, convenience, and entertainment.

That’s why the debate around things like Nightshade seems so fractured. Each group isn’t just arguing different sides of a problem—they're often arguing from completely different realities.
Source: YouTube, "Viral AI Reaction", 2025-04-01T08:4… (1 like)
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxhtEot_DnC95z7J4p4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw3u2AJJcBwXxWg1h94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzjEKtJ7ZfOLzJmOkF4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzxS-K3ZIn8nEMpAZF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxXOGW8ZgzgpYZPPPB4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxcI1kWqsosqVNzjLt4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugyn8u1ApB-EmHkrO594AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyxjAMSep_b0IyNwzN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxBaTVFWnC-YGYy3ul4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwZiUNVmKijt2lAIWp4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
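The raw response above is a JSON array of per-comment codings keyed by comment id, which is how the Coding Result table for a single comment can be recovered. A minimal sketch of parsing it with the standard library; the two sample records are copied from the response above, while the variable names (`raw`, `codings`) are illustrative:

```python
import json

# Two records copied from the raw LLM response above; in practice `raw`
# would hold the full array string returned by the model.
raw = '''[
  {"id": "ytc_UgzjEKtJ7ZfOLzJmOkF4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxhtEot_DnC95z7J4p4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Recover the dimension/value pairs for one comment.
coded = codings["ytc_UgzjEKtJ7ZfOLzJmOkF4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # unclear resignation
```

Malformed model output (truncated arrays, trailing prose) would raise `json.JSONDecodeError` here, which is one way a coding run can end up with every dimension marked "unclear".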