Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the biggest thing for me is this: A human person who somehow has never seen any art piece, any animation, nothing, could start drawing and end up with a great level of skill at creating art. Even if not, the point is that they'd be able to create abstractions of real world things (and concepts). Take cave paintings for example, being among the first ever art pieces. Meanwhile, generative A.I. could never. It _needs_ to borrow, to _steal,_ to be able to create literally anything. If a model was only given realistic photos, it would not be able to create any abstract representations of anything. It would only be able to recreate photo-realistic looking images. It would need to have an active, thinking, conscious mind to be able to see a real photo and create any abstract drawing representing it. This fact is what, for me, refutes any and every argument someone in support of generative A.I. could make regarding fair use and/or "taking inspiration". I could not care less if the output might be able to be compared to human output in how transformative it may look (i.e. "it doesn't look exactly the same"). My problem is the way generative models fundamentally work, and how companies exploit this. If a generative A.I. model was only trained on consensual training data, then I at least wouldn't say it's theft. I still wouldn't view any given output as art though, and prompters would still not be artists. And if someone were to try passing off an output as their own creation, that would still feel disingenuous/fraudulent, even if not being theft. I still wouldn't feel good about anyone using it, but it just, at least, would not be theft. And of course there are the environmental impacts that would still exist despite consensual training data.
Source: YouTube | Viral AI Reaction | 2025-08-11T19:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzvikU_rxb-8lia3wR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwXlls0XQcOhJcmEEh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwPp4hDO-4kZBWTxfh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw1tX6wGYGioDmvnkt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyT00htgMd8UooiGrB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw1fb_tPRXIpu4cMoh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwKLDPV1X1un7AdGjV4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugz-oud5zxqUlHO4Y8B4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxFkLiBHgnNEehjOY14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzsbpQH5ovS9gt4RDp4AaABAg", "responsibility": "government", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
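The raw response above is a JSON array of per-comment codings, keyed by comment id. A minimal sketch of how such a batch response can be indexed for lookup (variable names are illustrative; the JSON excerpt is copied verbatim from the response above):

```python
import json

# Excerpt of the raw LLM batch response shown in this export.
raw_response = '''[
  {"id":"ytc_UgzvikU_rxb-8lia3wR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwPp4hDO-4kZBWTxfh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

records = json.loads(raw_response)        # list of per-comment coding dicts
by_id = {r["id"]: r for r in records}     # index codings by comment id

coding = by_id["ytc_UgwPp4hDO-4kZBWTxfh4AaABAg"]
print(coding["emotion"])  # prints "indifference"
```

This makes it easy to cross-check the table for any coded comment against the exact model output for its id.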