Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Even if the images are taken without consent or pay - the AI program is not doing anything legally wrong, right? The person typing in a prompt is arguably more at fault than the company making it available, no? Example, the gun maker and the shooter. Gun makers are not at fault for crimes committed with their product. On top of that, typing in a prompt isn't illegal, you may be able to argue an ethical point of "well, should people be using AI art?" But can their be legislation prohibiting the use? No, of course not. AI company's are not stealing any work nor are the prompters, technically. What harm is there from a user creating images while they use the restroom? Or wait for class to start or just as a way to kill time? If only that one person sees the image, is that any different than viewing pics on the internet in general? What about if I take the Mona Lisa and save it, edit it and put it as my screen saver? Is that harmful? Is the harm when someone sells the AI art because its taking a potential buyer from the original artist? The whole bit about making artists look bad doesn't hold water to me, the artist can always just come out and see, hey, that's not my work, and clear the air, no? Should the AI company's continue their practices they've been doing and just grabbing all art that is on the internet, no, that is, in my opinion, bad. But this seems to be a Pandora's box type of situation, the actions were already taken.
Source: youtube · Viral AI Reaction · 2022-12-27T07:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzRvlnOZFPQJ0FD6ip4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxwk8zL5RlugpjrX154AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxraWcc9UuBnsJBkCR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz1RZfpO6t7ou06RPd4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw_GEo2H6SFxjJAuSN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxnIglTBd2rS1RqPWd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzeZJnBXm5Ee5X96Ph4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz-YdvtpzW7yi7g9dN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwemJi_QRC3SlVZzEN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwzZqkBHHrjssGJxl94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
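To inspect a single comment's coding, the raw response can be parsed as ordinary JSON and indexed by comment id. The sketch below is an assumed workflow, not this project's actual code: the `raw` string stands in for the LLM response above (shortened to one entry here), and the id shown is the one coded in the table above.

```python
import json

# Raw LLM response (truncated to one entry for illustration; in practice this
# would be the full JSON array returned by the model).
raw = '''[
  {"id": "ytc_Ugz1RZfpO6t7ou06RPd4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# Parse the array and build a lookup table keyed by comment id.
rows = json.loads(raw)
by_id = {row["id"]: row for row in rows}

# Look up the coding for the comment shown on this page.
coding = by_id["ytc_Ugz1RZfpO6t7ou06RPd4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # user indifference
```

Because the model returns one JSON object per comment across all four dimensions, the same lookup generalizes to batch audits, e.g. counting how often each `responsibility` value appears.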