Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm a programmer who went to art school, and I haaaaaate the things AI is doing in general to society and at this point have mostly opted out of development as a career path on ethical objections about where tech is going in general. I share about 85% of the same gripes you mentioned in this video, and a large swath of other criticisms mostly related to privacy, deception, and the basic fact that AI is dangerously unaware how full of shit that it is, and has no business being used for any task that requires accuracy unless its training set has been manually curated to be pure (it never is). I also have very serious gripes about how it has reduced any interest in actually gaining a skill directly because you can just push a button and get the result. For example, if you are great at math, yes you can just grab the calculator in your pocket, but if you suck at math you won't even know when it is relevant to do that half the time and a ton of opportunities will just fly over your head completely unnoticed because you have to grasp math well enough to even notice when it could help you and you should bother doing a calculation, which the phone app can't do for you and neither can AI. Same is true with grasp of language, music, art, civics, and about every other thing people don't bother learning. This also applies to art, and it's kind of a given that nobody just clicking the generate button on gpt knows anything about the golden ratio, field of view, bezold effect, or a billion other things that a trained and practiced artist would know, and can only feel in the dark for an answer because they don't even comprehend what is wrong with the image the robot regurgitated for them when it looks off. I am very, very glad to see that at least somebody is developing tools to protect artists. 
But this got me thinking if there's more that could be done under the hood that the artist community would be unlikely to address directly, such as perhaps a web framework that is intentionally hostile to AI scraping and misdirects it to broken files on purpose to protect contributors, or maybe some other way to sneak broken metadata into non-image based scrapes that would cause model breakdown faster. This would at the very least, force manual curation of training sets, which will allow human beings with actual abstract reasoning to vet it, and additionally would put actual humans that are valid targets for prosecution in the pipeline so some recourse is possible legally when theft still occurs. I am not explicitly against AI in general, but I am very much against the yeeehaw reckless cowboy usage of it that has very quickly become dominant, and pretty much all instances of it being used as an excuse to fire people to save money with a totally inferior alternative for everyone else. Just wanted to let you know that not all "tech bros" are on the other side of this, we're just massively outnumbered by the ones that are and the corporate paychecks that incentivize them to roll with it.
Source: YouTube · Viral AI Reaction · 2024-11-03T19:5… · ♥ 3
Coding Result
Responsibility: company
Reasoning: deontological
Policy: regulate
Emotion: outrage
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw0BYJH5KrnBia3oGd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyzskeS5JaAJM8KcJZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxyFdgy25AiHh_Yczh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz6_lGEoFjr5PXsn7l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw8CHIUSh6tDBU_e1B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwU0I_diVRbdFvBi_t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwpoL8RTg1SZnWKAKV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy8GglHYhio_bESJZB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugw-WesTtuDtiaOcWHp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzIxhocK_xEd8rtzYF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
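Looking up the coding for one comment in a raw response like the one above can be done with a few lines of standard-library Python. This is a minimal sketch, not part of the pipeline: the helper name `coding_for` is hypothetical, and the two-record `raw` string is an abbreviated stand-in for the full JSON array.

```python
import json

# Abbreviated stand-in for the raw LLM response: a JSON array of
# per-comment codings keyed by comment id.
raw = """[
  {"id":"ytc_UgwU0I_diVRbdFvBi_t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw8CHIUSh6tDBU_e1B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""

def coding_for(raw_response, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

rec = coding_for(raw, "ytc_UgwU0I_diVRbdFvBi_t4AaABAg")
print(rec["policy"])   # -> regulate
print(rec["emotion"])  # -> outrage
```

With the record in hand, the coded dimensions (Responsibility, Reasoning, Policy, Emotion) can be compared directly against the table shown in the Coding Result.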