Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can empathize and agree with the moral and ethical concerns that artist have in regards to AI art, but some of the stuff this video shows, especially comparing it to nuclear weapons, is just straight up fear mongering and a logical fallacy known as catastrophizing. The worst case scenarios are brought up here, and I can't help but compare it to the anti-electric propaganda of the 1900's. I think that as the technology develops, a few hardships will fall on the artist community and some artist will inevitably lose a part (not all) of possible income they can receive, and that's terrible. But I also think that the integrity of art through human hands will persist and flourish as it always has sense the first cave paintings on cave walls and possible benefits and upsides will be apparent to artist and creatives alike. (Such as writers who can use the program to generate inspiration or scenery for fiction) So, like with most major leaps of technology, I expect the end results to not be utopic, but not be disastrous. To be good to some, and to be bad for others. I personally think Solar Sands drives the same sentiment that I feel, but maybe leans more in a direction that is more catastrophized than I'd agree with. And perhaps my opinion of the technology leans in a direction that's more optimistic than I should be feeling, either way, it was a wonderful video and helped me understand the intricacies that AI poses on the artist community.
youtube 2023-02-17T09:1… ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzRybU8KH37uKtVGSV4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgzO3eyo3JWiPU70m8p4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgzorPpzFvHF-ul7gvN4AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgxhZ3rfCxqLdZm_XRt4AaABAg", "responsibility": "unclear",     "reasoning": "consequentialist", "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugy3XYLgpxdJQ1E4BcN4AaABAg", "responsibility": "distributed", "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugzrqs4gtq88Ixk9l7t4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgwCDJYJyEdTHS-N7ah4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear",       "emotion": "resignation"},
  {"id": "ytc_UgyWwj5_pV_kX3rsvk54AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugw51SLM09Z3RD2EkLJ4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "regulate",      "emotion": "mixed"},
  {"id": "ytc_UgzaIuR6WlTzHc8DGd54AaABAg", "responsibility": "ai_itself",   "reasoning": "virtue",           "policy": "none",          "emotion": "disapproval"}
]
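The raw response is a JSON array of one coding record per comment, keyed by comment `id`. A minimal sketch of how such a batch response could be parsed and the record for a given comment looked up (the pipeline's actual code is not shown here; the two-record input below is an abridged excerpt of the array above):

```python
import json

# Abridged raw LLM response: a JSON array of coding records, one per comment.
raw = '''[
  {"id": "ytc_UgzO3eyo3JWiPU70m8p4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy3XYLgpxdJQ1E4BcN4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

records = json.loads(raw)

# Index the batch by comment id so a single comment's coding can be retrieved.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_UgzO3eyo3JWiPU70m8p4AaABAg"]
print(coding["emotion"])    # outrage
print(coding["reasoning"])  # consequentialist
```

Indexing by `id` makes it straightforward to join each coding record back to the comment text and metadata shown above it.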