Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@laurentiuvladutmanea First of all, thank you for the input. But there are many flaws in your arguments. Let me answer in order:

1. Still, many cosplayers and fan artists make money from it, which you know perfectly well: costume design from existing IP, 3D printing of props, influencer work, AdSense on YouTube, commissioned photo shoots. The extent to which something is a passion project does not cancel this out, unfortunately. This would no longer fall under fair use.

2. Now, in some cases they might do that, but usually that results in a big shitstorm. Since that's what you want now, maybe they'll put up with the bad reputation, because that's what you folks want in general, and some of you want to claim moral consistency, or at least pretend to have it. Also, enforcing what you are asking for would require automated processes that would make it a hell of a lot easier for Nintendo, Disney, and the like to pursue copyright infringement.

3. Referencing and collage, as long as they're transformative in some way, are protected under fair use. What you're asking for is to ban something even if it's highly transformative. You could argue that text-to-image models are far more transformative than what an ordinary artist does with a portrait photo of someone they are referencing. That would give the hypothetical photographer or rights holder a strong argument to sue.

4. Fair use is not a worldwide issue? Well, neither are the changes in the law some of you propose, the GoFundMe for example. So what would stop other countries from doing this in their territory? Wouldn't that make things even less controllable? Do you think China, Russia, et al. would not continue to train if there was money to be made?

5. Large language models are not designed to provide facts, but to perform language-related tasks and process data. However, the methodology of these systems can be applied to many other tasks, code for example.
Your argument is very flawed, as this is not their purpose. Similarly, one could argue that cars are bad because they are less cute than horses.

6. Protein-folding algorithms are also trained on large data sets. People's data has been used in them without specific consent for just those models, since some of that data was collected before those protein-folding algorithms even existed. Does that sound familiar to you? Looking back, it would be the same thing to say that I don't want my sample to be used in medical algorithms now that they exist. Certainly that's not necessarily a realistic scenario, but it would have to be circumvented if you get your say. And that would slow down progress, and it would ultimately cost human lives that could have been saved.

"The very fact they used copyrighted data and refuse to rebuild their models on public data only debunks your idea." Sorry, but this is not true. Stability AI, for example, has made extreme cuts to their current model. And that is a process that is normal for new technologies: the boundaries have to be explored first. I'm not saying that no data should be safe, but the question is how we want to respond. You want to literally cripple art as collateral damage. The transformative iteration of others' earlier work is an important backbone of all the arts. We all stand on the shoulders of giants. Where should we draw the line? Hence my examples. Instead, maybe think about what terms of service you agree to in the first place.

"And we have no evidence we will have AGI in near future, and all evidence points to this not being the case." That's not true. There may be no evidence for it, but there is no evidence against it either. Sure, Occam's razor and all, but we should still be prepared, don't you think? And most researchers in the field predict that it will happen in the next few decades. Are you better informed about AGI than the majority of computer scientists?
Instead of trying to bend technology to fit our old way of thinking, we should fix our system to be prepared for the day when AGI comes. Make it future-proof. Otherwise, we will hit the wall without any protection or preparation, which would eventually mean complete chaos, anarchy, and death.

"ChatGPT costs 100 thousands dollars per day to run, and was trained on orders of magnitude more data then any human would see in their life, and it still fails at being rational." $100,000 per day is much less money than it would cost to produce the same amount of data, at the same quality, with human labor. It is absolutely illusory to think that it's not effective in that regard. It is very clear that you have not looked into the matter neutrally, but instead just want to confirm your own bias when looking things up.

"Language models have shown they need exponential increases in size and data to get less then linear increases in accuracy and quality of output. What makes you think the same will not happen to these images?" Because you are not taking into account the advances in model architecture and methodology. It's not just the amount of data; other aspects, like the multimodality of the models, improve their performance as well. Do you really think there won't be more advances, and we'll just throw more data at these models and do nothing else to improve them? DeepDream is not the same as StyleGAN, and GLIDE was not the same as DALL-E 2, Midjourney, and Stable Diffusion. A lot of improvements are coming soon. A major criticism of diffusion models is the difficulty of controlling them and getting specific results. What happens when you combine them with LLMs, for example?

"And artists adapt by calling for regulations on these programs" Well, what you are asking for is a band-aid that slows down progress for maybe one or two years and nothing more. What's going to happen after that?
I am sorry, but not a single point you have made is the slam dunk you may think it is. I know it's impossible to imagine how crazy the next few years could get, but I honestly think we should prepare for the craziest scenarios instead of trying to maintain our good old value system and therefore failing to see the train rolling towards us.
YouTube · Viral AI Reaction · 2022-12-26T17:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_UgytOalwzjA2nEqJ5Nh4AaABAg.9k4RKb6NhOy9k4Zq8xdWvG", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_Ugyqz5GrUzdM81nnqqR4AaABAg.9k4Ng24A1YZ9k6eSKOD09K", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugyqz5GrUzdM81nnqqR4AaABAg.9k4Ng24A1YZ9k6gqxG7ps9", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_Ugyqz5GrUzdM81nnqqR4AaABAg.9k4Ng24A1YZ9k6iZDEvUyh", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugyqz5GrUzdM81nnqqR4AaABAg.9k4Ng24A1YZ9k6k4NPlYR2", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxriWVjNlNlQK2obFB4AaABAg.9k4N_5ehTTa9k4ez_KE1h2", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugx1Xg5kBZke8SuVtWJ4AaABAg.9k4J2Yc2g9y9k4fNWICodM", "responsibility": "distributed", "reasoning": "mixed", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytr_UgwyaWuUAgUBIYVUdTx4AaABAg.9k4J2GExvs09k4KzRfuzFI", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwyaWuUAgUBIYVUdTx4AaABAg.9k4J2GExvs09k6SknzPg8f", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugzevgq1qhjyxR8WqMJ4AaABAg.9k4HMk9ut5B9k4S3ISArEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
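The raw response is a JSON array of per-comment codes, so the coded dimensions for a given comment can be recovered by indexing on the `id` field. A minimal sketch (the two records below are copied verbatim from the response above; which record corresponds to the comment shown in this report is not stated here, so the chosen id is illustrative):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytr_Ugyqz5GrUzdM81nnqqR4AaABAg.9k4Ng24A1YZ9k6eSKOD09K",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwyaWuUAgUBIYVUdTx4AaABAg.9k4J2GExvs09k6SknzPg8f",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]'''

records = json.loads(raw)

# Index the coded dimensions by comment id for quick lookup.
by_id = {r["id"]: r for r in records}

coding = by_id["ytr_UgwyaWuUAgUBIYVUdTx4AaABAg.9k4J2GExvs09k6SknzPg8f"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# company consequentialist none indifference
```

The printed values match the dimensions shown in the Coding Result table for this comment (company, consequentialist, none, indifference).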