Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Its really easy to find credible sources that criticize Glaze... at least if you look further than just the top 3 results on a Google search that is. Among them "SPY Lab" from the ETH (Tech University) Zürich, whos articles i and analyses i can highly recommend, because they are going over the significant weaknesses of the tools.

And i dont know how you can think that anything did affect AI training, considering the progress that the models did just in the past few months alone. By your own arguments of "i havent seen anyone disprove that poisoning AI works", then you should also ask yourself, if you have seen a single credible source actually proving that it does, outside of the show-pictures from the very website that is advertising the tools and that on a scale that is not completely negligible.

And the whole argument of "i havent seen any big AI company comment on this, so it must work" is completely bullocks. If it is not a threat to them whatsoever, then why would they care to make statements about it? If it were an actual threat that affects their models, then they would be legally forced to make statements about it, because it will affect the product and is therefore necessary information for investors. Cherry picking sources and bad examples of AI generated imagery is not going to help to form a realistic view on the situation and might just be misleading to your very significant viewer-base.
youtube · Viral AI Reaction · 2025-03-31T14:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxGn0-w-DQVtaLAE-N4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5TXYr-Tnb0-Oek5p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugzzyz-yYhlMyIeCi554AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwqLMUUexqXHKKauA14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwkIhkOJB7vd3y1SRp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzE6p-KfIJ9SOrfBfJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgzaXF4g0Cx0hZvOZuB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugw0eyjbmVC5XJyHQJx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw3DjGVpRH2qZLtURp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx93hNsLr623UrHJYd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"amusement"}
]
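The raw response is a JSON array of per-comment codings keyed by `id`, so the coding for any one comment can be looked up by indexing the parsed array. A minimal Python sketch of that inspection step, using a two-record excerpt of the response above (the variable names are illustrative; the field names match the response):

```python
import json

# Excerpt of the raw LLM response: a JSON array of coded comments.
raw_response = """[
  {"id": "ytc_Ugw5TXYr-Tnb0-Oek5p4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxGn0-w-DQVtaLAE-N4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]"""

# Index the batch by comment id so any single coding can be inspected.
codings = {rec["id"]: rec for rec in json.loads(raw_response)}

# Pull out the record for the comment shown in this page's Coding Result.
rec = codings["ytc_Ugw5TXYr-Tnb0-Oek5p4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {rec[dim]}")
```

The id-keyed dictionary makes the lookup O(1) per comment, which matters when cross-checking many coded records against the batched model output.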