Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Since I make lot of typo here is an AI explaining my argument in a more articulate manner anjoy: Poisoning the data pool to hinder AI development is a flawed and ultimately ineffective strategy. Let’s break it down: the idea behind poisoning the AI training datasets is to introduce corrupted or misleading data, hoping to degrade the AI's quality or make it harder to train future models. However, this approach fundamentally misunderstands both the scope of AI development and the digital nature of the internet.

First, AI models are not going to get worse. The tools and models that already work will continue to function, regardless of poisoned data being introduced into future datasets. Stable Diffusion, ChatGPT, and other AI models trained on prior datasets will not retroactively become worse because some new images on the internet are "poisoned." Progress may slow, but these tools are here to stay and will continue improving with innovations that don’t rely solely on scraping publicly available data.

Second, poisoning data does not prevent inclusion in datasets. Large-scale datasets used for training AI are not handpicked; they are assembled through automated web scraping. Even with "poisoned" images, the sheer volume of clean, usable data online will likely outweigh any attempts at pollution. Moreover, companies building these models can (and will) adapt by improving data filtering methods or sourcing proprietary, high-quality datasets immune to public interference.

Third, the internet is inherently open and replicable. Once you upload content online, it’s no longer under your control. It gets copied, shared, and stored in countless places, making it nearly impossible to fully exclude from any future dataset. Poisoning the pool assumes a level of control over digital content that simply doesn’t exist.

Finally, this doesn’t address the root issue. People who are upset about AI’s impact—whether it’s on art, writing, or other creative industries—need to advocate for real solutions: better regulations on dataset usage, clear copyright protections, ethical guidelines for AI training, or systems that give creators more control over how their work is used. Poisoning datasets is a symbolic act, but it doesn’t solve the core problems. If anything, it might create a false sense of action while leaving the real issues untouched.

In summary, poisoning datasets isn’t the answer. AI development isn’t going backwards, and introducing corrupted data won’t erase existing progress. To make meaningful change, we need to focus on actionable solutions that balance innovation with respect for creators’ rights—not digital sabotage that ultimately achieves little.
(Comment text reproduced verbatim, including the author's typos, since it is the data record under inspection.)
Source: YouTube · Viral AI Reaction · 2025-01-15T22:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxFPMV8LjXOK-B_NKh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxvUPmOaex-1ORnVjB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzfZT3vzTtTtzBUHbx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyCE3BHwMpYadUMABJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxUU7Oy1ER6PBx_tYF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyrG85DmGDkpKFvVfx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzDc4jXGo5SKFmH6-J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyiwAn4gFoGNhQc9TV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxGaQA2A7RAyqoAtlB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugyc2g1BftgfdNqIrSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
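A raw batch response like the one above is a JSON array of records, each carrying a comment `id` plus the four coded dimensions. The sketch below is one way to parse and sanity-check such a response; it is a minimal illustration, not the pipeline's actual code, and the allowed category sets are only the values observed in this batch (the full codebook may define more).

```python
import json
from collections import Counter

# Category values observed in this batch; the full codebook may include others.
OBSERVED = {
    "responsibility": {"none", "user"},
    "reasoning": {"mixed", "virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "resignation", "outrage"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and flag codes outside the observed sets."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in OBSERVED.items():
            if rec.get(dim) not in allowed:
                print(f"{rec.get('id', '?')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Example with a single record copied from the batch above.
raw = ('[{"id":"ytc_UgyiwAn4gFoGNhQc9TV4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
records = parse_batch(raw)
print(Counter(r["emotion"] for r in records))
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the codebook, so bad records can be flagged for re-coding rather than silently tallied.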