Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Liked the idea behind the video, kind of inspired me to spend a month and a half to create a model *almost* Invulnerable to nightshade, on high intensity and slower quality, i got around as low as 0.007% to as high as 0.3% difference on my own artwork before and after nightshade. Don't misunderstand me, i like drawing from time to time. What i was wondering about... you're using AI to fight AI, so what makes one model a "good" model? Is it the fact it hurts another one, that took so much time and work to train? I don't necessarily want to spend hours drawing just to replace a profile picture, that's why i created a model that was fed all my art (it kinda sucks cus the lack of training data) but it does what it is supposed to... I perfectly understand why most of the artists are upset about people generating "way better" art in seconds, than most of the community could ever create (including me). You either keep your art to yourself, or setup your profile so that it won't be scraped off of the internet so easily. I hate to see such a great community in shambles due to the fear of a technological advancement. People accusing each other of using AI... generated such hate in the community that the accused must private or delete their accounts. Since this is a sensitive topic to most of us, i won't be sharing my progress with this model anywhere, ever...
YouTube · Viral AI Reaction · 2025-01-25T00:4… · ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzbB-jmHGGn9bXhntx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgykIe5HnRHzJuykYAV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugyip1bsVlXEmlaL1VV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgydGtLI7wibhqWBsud4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz7o8LVd6gm_IF12eJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx9YJcBDHgBckZzuKJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzpcv4ODb-wl6854HB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxECCn4eGsMHsEYBOR4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzHDjfB5qNyQpe-KnN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugyapt-3BORuZEx02QV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
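The raw response is a JSON array with one coding object per comment, each carrying a comment `id` and the four dimensions shown above. A minimal sketch of parsing and validating such a response in Python, assuming the allowed codes are the ones that appear in this document (the actual codebook may define more categories):

```python
import json

# Allowed codes per dimension, inferred only from values seen on this page;
# the real codebook may differ (assumption).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "ban", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding objects."""
    items = json.loads(raw)
    valid = []
    for item in items:
        # Skip anything that is not an object with a comment id.
        if not isinstance(item, dict) or "id" not in item:
            continue
        # Keep the coding only if every dimension has a recognized value.
        if all(item.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(item)
    return valid

# Hypothetical one-item response for illustration:
raw = '[{"id": "ytc_x", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}]'
print(validate_codings(raw))
```

Codings with an unknown value in any dimension are dropped rather than repaired, which keeps downstream counts conservative when the model hallucinates a label.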