Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I get your point, and you ahve a few good ones, but i think you misunderstood a few of them and mixed up some details. If we take the accessibilitsy argument for example, AI art is completely free, and i'm perrsonally running it on my local machine since the it's much faster to generate, but i am not paying a cent (besides the little bit of electricity) for it, while i would have to spend a lot of money on an artist to draw/make something for me. This argument is very likely not about the cost of drawing yourself, it's about the cost of getting a good-enough result. There were also a few mixups between AI, ML and DL, but i'm not gonna go into detail right now. The for poisoned art, maybe it will continue to work, maybe it doesn't. At the end of the day you don't know what datasets these companies use, and i kinda doubt they just throw away their datasets after using them, i feel somewhat confident openAI still has their dataset where they trained GPT 1 or 2, and if poisoned art or texts or sth is going ot become a serious problems, i don't see a reason why they coulnd't fall back to those. Currently poisoning art like you're doing it does work, that's not debatable, you can't just restructure the entire model architecture and retrain the model within a week, not how it works, but it's definitely going to become a cat and mouse game.
Source: YouTube · Viral AI Reaction · 2025-04-01T14:3… · ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzimQSgI21IXinrGyB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyIh1lZffE4vtZxMhl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzTTHRdDXo-nBr04kB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "sadness"},
  {"id": "ytc_UgwwuhIZTuvBZ9Bf7754AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzCZOuP8AMNzapsSqd4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyxJ_eiuaWA4nlMhVZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy9MJdBdAPA8HAcmL14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwRiT-oWbt_f6pAtBl4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwHPLr-RlQGaOZQ5aJ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzADhNKPzJ5GvQ_omd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
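The per-comment codes above are a plain JSON array keyed by comment id, so extracting the four coded dimensions for any one comment is a matter of parsing and indexing. A minimal sketch with Python's standard `json` module, using two entries copied verbatim from the array above (the ids and values are from this record; nothing else is assumed):

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# This excerpt reproduces two entries verbatim from the record above.
raw = '''[
  {"id": "ytc_UgzADhNKPzJ5GvQ_omd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzTTHRdDXo-nBr04kB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "sadness"}
]'''

# Index the array by comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Pull the four coded dimensions for one comment.
entry = codes["ytc_UgzADhNKPzJ5GvQ_omd4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {entry[dim]}")
```

Looking up the id that matches the Coding Result table yields the same four values shown there (responsibility none, reasoning consequentialist, policy none, emotion indifference), which is a quick way to verify a coded row against the exact model output.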