Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@laurentiuvladutmanea

> From what I found, this was not about making content or works of art.

Irrelevant. If a dataset can legally exist, it can legally exist. How it's used later (and not always by the same people, even) is a different story. (And it's not even always used for profit -- you CAN level this accusation against OpenAI and Midjourney, but Stable Diffusion's model is free and open source. When you're paying for a Stability service, you're paying for computing costs + some extra bells and whistles, which are fine to provide. And with places like Huggingface, you don't _have_ to pay even if your GPU is too weak to run SD locally in a reasonable manner.)

LAION in particular have been very tricky about this, even though their trickery is probably just the desire to make things technically easier for themselves. Instead of crawling the web directly, they re-crawled Common Crawl, a dataset that has already existed, been battle-tested in court and deemed fair use. "X is fair use but part of X isn't" is likely not an argument that's going to fly. LAION is almost certainly legal under US law.

> It is also much more complex then anything we build, and based on fundamentally different concepts from what these programs have been based on.

Irrelevant. The universal equation is [Input]->[System]->[Output] and whether the input is "dataset" vs "all life experiences" (which is really just another dataset, one collected by our senses over years of life) and whether the system is "a bunch of code" vs "a human brain" (in both cases, we're talking about a machine that interprets data, one programmed and simpler, one biological and _much_ more complex) isn't really relevant. If something wouldn't be stealing for one system, it wouldn't be stealing for another, nor for its creators.

> No. It mimics the results, but the fundamental ways it does this are completely different.

Irrelevant. If for whatever reason I refuse to walk with my legs and only walk upside-down in a handstand position, I'm still walking, even if in a completely different way.

> Explain. Because what you said is nonsensical. There is a reason why programs are not give rights to copyright what they create or human rights.

Irrelevant. This has nothing to do with whether these programs or their creators steal and everything to do with the fact that we do not believe programs have personhood. At some point, we might end up in a future where AGI actually exists and is sufficiently human-like that it makes sense to declare it a person, but we're not there yet. We have no reasons to believe that a program like SD has qualia and/or personhood. (That said, the question of whether code can _ever_ have qualia is an insanely complicated one largely because we know less about us humans than one might think...)

> ...........These models dont do this. They dont look in any way a living thing does, and they have no ability to hold concepts. They have no inner models of anything. They are just mindless pattern matching programs, that are incapable of understanding concepts, and only generate something to fit the patterns in the data set.

Bwahaha. Can you prove that _human consciousness_ is anything more than insanely advanced pattern matching?! (Hint: you almost certainly can't, and it's much worse than even that -- philosophy refers to consciousness as "the hard problem", and it's not even immediately apparent whether you can prove that someone else is conscious and has an inner experience. This is why the p-zombie thought experiment about a being _perfectly_ imitating a human but lacking any inner experience/qualia has any merit in the first place. Not all philosophers buy into this idea, even as a thought experiment, but either way, you're walking into a philosophical quagmire of the highest caliber that NOBODY knows a good answer to. We know little about how human consciousness works or what it even is, so it's a subject best approached carefully in the already complicated AI discourse.)

> Again, the immoral data gathering methods are wrong. And no, I dont even care if they are legal(even if I doubt it). Being legal does not make something right, and something being illegal does not make it wrong. Legality is just a good shorthand and socio-economical tool.

They're legal, and more than that, they're morally right simply because of how many uses (that aren't image generation) this kind of scraping _already has_ elsewhere -- uses that literally no one has ever complained about, because there _isn't_ anything to complain about. (And because they didn't feel threatening.) Data scraping has given us more than it has taken away.

> And so is the entire concepts behind this dehumanization of human art.

A hypothetical AI model that's built purely out of Creative Commons and public domain data would still be an art-generating automaton, and assuming it'd become good enough, it would _still be_ "dehumanization of human art" or whatever, so... this is also irrelevant. Moreover, your phrasing seems to reveal you'd still be angry even at a hypothetical squeaky-clean-data model simply because it's an art-generating automaton, and if that's true, it raises severe doubts as to whether you're arguing from a rational, reasoned place.

0 out of 7.
Source: youtube | Viral AI Reaction | 2023-01-14T21:3… | ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       contractualist
Policy          industry_self
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_Ugw-FgPnivIYVqNeGSB4AaABAg.9ktG2rxtA409kwYo1jF0N4","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugw-FgPnivIYVqNeGSB4AaABAg.9ktG2rxtA409l29bhU3-gE","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
  {"id":"ytr_UgyKZ57FLJvVAmwqL0h4AaABAg.9kt0rKBzbOD9l8_4GC7hWT","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzLxMDzxMQQ0gwQCXZ4AaABAg.9kry1MnRO0J9ksPD3rog_I","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgzLxMDzxMQQ0gwQCXZ4AaABAg.9kry1MnRO0J9ksSvSqNqrF","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytr_UgzLxMDzxMQQ0gwQCXZ4AaABAg.9kry1MnRO0J9ksgZI1I5R-","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzLxMDzxMQQ0gwQCXZ4AaABAg.9kry1MnRO0J9kso9gNKyd5","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytr_UgxfrehYccz1uXW-OP94AaABAg.9krtCE5MMuE9ksOs6vjve9","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugx-HkwsXLYVGobY54t4AaABAg.9krqp36NfCh9m5ercUWcia","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugyv3x7PFc4in4CBGLR4AaABAg.9kpoaaB9I1C9ksQmda2NvO","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
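A raw response like the one above is a JSON array of per-comment records, each with an "id" plus one label per coding dimension. As a minimal sketch of how such a payload could be validated before use (the allowed label sets below are inferred only from the records shown here; the real codebook may contain more values, and `parse_codes` is a hypothetical helper, not part of any pipeline shown in this document):

```python
import json

# Allowed labels per coding dimension, inferred from the payload above.
# Assumption: the actual codebook may define additional labels.
DIMENSIONS = {
    "responsibility": {"company", "user", "none"},
    "reasoning": {"deontological", "contractualist", "consequentialist",
                  "virtue", "mixed"},
    "policy": {"liability", "regulate", "industry_self", "none"},
    "emotion": {"outrage", "indifference", "approval", "mixed", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A usable record is a dict with an id and a known label
        # for every coding dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(rec)
    return valid

# Example with a shortened, made-up id:
raw = ('[{"id":"ytr_abc","responsibility":"company",'
       '"reasoning":"contractualist","policy":"industry_self",'
       '"emotion":"indifference"}]')
print(parse_codes(raw)[0]["policy"])  # industry_self
```

Dropping malformed records instead of raising keeps one bad model output from losing an entire batch; a stricter variant could log the rejected ids for re-coding.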