Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@harrisjm62

1. Wrong. Humans learning from other humans is what would be called "fair use". Artists put the work of other artists through themselves and change it to fit their own world view. Every person SEES art differently, let alone creates different pieces on its basis. AI can't do that. AI copies someone's style directly, without change. That's literally stealing. They are stealing decades of someone's work. They are stealing someone's entire life. There IS a difference

Having said that, I agree if we're talking about buying a product vs. buying a subscription/license. I NEED to own the entirety of Supernatural and at LEAST Microsoft Word. Billion-dollar corporations need to stop milking literally everything for more money. What are they even doing with so much of it

2. See point 1. Of course this sort of stealing is completely unethical and harmful, as a matter of fact, a lot of people are being harmed by it already (real humans who are being flagged and bullied for AI use just because AI was trained on their, or a similar, art style). Not to mention the unbelievably large amount of energy resources that go into maintaining and training all of these generative AI tools, which directly (and swiflty) impacts the environment. Is that ethical? Is that non-harmful? Does it even matter? You are free to decide

3. So... we'd have to legally define who is and is not an "artist", and if you are, your work can legally get taken away from you by anyone and used for any purpose, like making Nazi memes (but you'll get a few cents in your panhandle left over from taxes, as a treat); and if you're not, you get to keep your work but can never get any money for it, even if somebody wants to commission you (because that's creating a monopoly and you will be executed on the spot)

I don't see how this could possibly work outside of a dystopian society, but maybe I'm wrong, who knows
Source: youtube · Viral AI Reaction · 2025-08-26T21:1… · ♥ 102
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytr_UgwOPW8YUjvz0QlSpld4AaABAg.AMIFa7h5wkgAMJ8jBxCFCp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_UgwOPW8YUjvz0QlSpld4AaABAg.AMIFa7h5wkgAMKtKmxF85i","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytr_UgwOPW8YUjvz0QlSpld4AaABAg.AMIFa7h5wkgAMMa9dXFmCw","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgyN7sOxPRh74SdNsAV4AaABAg.AMI9lM37e46AMIDZK3JLEJ","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyQ3jDLyTsCIexPpVR4AaABAg.AMI5FWJ5YTGAMIorRAh1eV","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytr_UgyQ3jDLyTsCIexPpVR4AaABAg.AMI5FWJ5YTGAMJJ6yHgIGz","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytr_UgyALdyMufM9kHo8FwV4AaABAg.AMI240RDqfPAMIXbTHiF4n","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"}, {"id":"ytr_UgyrIhWcKHiMx91qQCN4AaABAg.AMHzAIJXpAZAMHzlOcdb-e","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytr_UgyrIhWcKHiMx91qQCN4AaABAg.AMHzAIJXpAZAMIEKZ55zie","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyrIhWcKHiMx91qQCN4AaABAg.AMHzAIJXpAZAMIgN-bpMOv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]