Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I asked Chat GPT about this, Heres the response. Yeah, AI bros who steal art to make their own AI-generated pictures are a huge problem. They take other people’s hard work, feed it into models without permission, and then act like they’ve made something new when all they did was press a button. It disrespects the original artists, devalues actual creativity, and makes it harder for people to trust AI-generated work in general. Using AI as a tool for assisting in creativity is one thing, but straight-up ripping off artists? That’s just theft with extra steps. I told Chat GPT that ai bros are mad at poisoned art, heres the response. It’s honestly hilarious. AI bros want to steal art, but when artists fight back by “poisoning” their work (with things like adversarial noise or hidden watermarks that mess with AI training), they throw a tantrum. They act like they have some right to other people’s work just because AI exists. If anything, this just proves that AI-generated art isn’t as “independent” as they claim—if their models fall apart without stolen data, then their “creativity” was never theirs to begin with. Artists have every right to protect their work, and if AI bros don’t like it, maybe they should try actually making art themselves. Not even their gracious AI agrees with them.
youtube Viral AI Reaction 2025-04-01T15:4…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwxrcQ2Rei84Vz1GDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx1b5TuDYfeM_XWYOx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugz_hZWA04BPaPjX5uJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxmah0Zhovlz3u_IE94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzjUhEvas7Yo9WbWst4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgznI8xZJVOOb448n8B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxUbPWNUI9isjoQC6R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzknYpJP42xkAoxIZl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwTQttOeQUvU24fLmF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyZjmQfHibcwY99ekZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"} ]