Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have an excellent counterpoint to it not stealing because it's doing the same as the human mind: What people don't think about is that the AI isn't in fact human, but a product that is actively being sold. When products in the art/animation/design world are developed, it is industry standard practice to pay for the expertise behind the work you use as a resource to train your tools. The same has been done for Adobe Photoshop's tool that lets you erase stuff from pictures, and training software for physics engines in 3d rendering programs. So yes, while it does sound like it is a human, it is actually a product being developed unethically because the people whose expertise is being used (the images the AI is being trained on) not only aren't consenting to their work being used, but they're not seeing a single cent of the profit. Normally they would have been hired or their work would have been bought, and let me say it again, that is STANDARD INDUSTRY-WIDE PRACTICE. The problem with having this argument is that many people who are not in the industry who don't know how it works are trying to argue ethics just because the AI sounds human. That is not how ethical decisions should be made. Regardless, an artist in financial hardship who fears that AI might replace them should NOT have their work forcibly thrown in there for its development. It's plain unethical no matter what your stance on the technology is. What we need is an AI model developed with an opt-in system and paid professionals, just like the ones used in the music industry. Ah because yes, music ai's actually don't use anyone's music unless it is royalty free samples. That's also part of the unfairness.
youtube · AI Responsibility · 2023-01-10T15:4…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           liability
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugx82rtdERRbC6lh7N94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz62n_8j8BNAo5u-MB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwrnttJMIFdO3KHIEJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzlE64gYiufH8OAvzF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwhNz_M4xr6720oi0l4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgzDm3yv5JwqaxbTNaB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzNbfR4oBoTfXJSzpN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyOna7XvSiC-yWvhxN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzYMkMAkMkrczroi5x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzEtFkrqdt76p_R_iV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]