Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
TLDR: I think that art neural networks are useful to society in general, but artists should have more control over whether or not their art can be used in a dataset. I am a game programmer. While developing a game I sometimes need a lot of throwaway art for placeholders: portraits, item icons, background images, etc. This art will never be used in the final version of the game because, as you mentioned in the video, generated art is a mess and it looks ugly. But it's a perfect stepping stone for me to show other people a rough idea of what I want to do in the final game. And I am using it not because I am too lazy to learn how to draw; I am using it because I already opted for a different field: programming. And obviously it's not like I use it instead of hiring an artist. I would draw it all myself or use some text-based icons, but that's more time consuming and I will throw everything away later anyway. And I still need a professional to do it at some point. So it's no harm done to anyone, right? And that's a useful thing to have. Still, even considering this specific use case, I think that training dataset creation should be regulated more. Artists should have the option to allow or disallow the use of their art in a dataset. I also think that if you use someone's name as a prompt, that person should be compensated.
Source: YouTube · Viral AI Reaction · 2024-10-24T11:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxrTa6QbHJpCcydFbR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzV7rEUi5hHFgFn7v54AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzrDRecKhFwXPQC01N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwPyx_TGr-8YiGxY-N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwMu2zP_ySvfnftO_h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxWbU9zcUDAGlOi3Ll4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzJn_wxKxYggazq0f14AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxmF8IGeoX2eD1VQRF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz98HPrXeeL5LuvXBJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzdGmnMySzsODMIWpx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
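The raw response above is a JSON array of per-comment coding records. A minimal sketch of how such a batch could be parsed and validated against the codebook — the allowed values in `SCHEMA` below are inferred only from the labels visible in this response, not from the full codebook, so they are an assumption:

```python
import json

# Allowed values per coding dimension -- inferred from the labels seen in
# this raw response; the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"developer", "user", "company", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "liability", "ban", "none"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference"},
}

def validate(records):
    """Return the ids of records whose values all fall inside SCHEMA."""
    valid = []
    for rec in records:
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec["id"])
    return valid

# First record from the raw LLM response shown above.
raw = ('[{"id":"ytc_UgxrTa6QbHJpCcydFbR4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
records = json.loads(raw)
print(validate(records))  # ['ytc_UgxrTa6QbHJpCcydFbR4AaABAg']
```

Records with values outside the schema (e.g. from a malformed model output) are simply dropped from the returned list, so they can be flagged for re-coding rather than silently stored.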