Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Claim: "Generative AI does not function without copyrighted work, period."

Presented Evidence: "Leading figures in AI companies state that today's leading models require copyrighted materials in their training."

Counter Claim: "Your argument does not follow from the evidence and is significantly flawed in several ways."

Counter Evidence:
1. You used "Appeal to Authority": Just because an expert in the field claims something about AI doesn't make it true.
2. You used "Faulty Generalization": The expert in the field qualified his claims as limited to models of "today" (current timeframe and technology) and to "leading" models (at the high end of the competitive field). The other expert qualified his claim in a similar vein about the need for that many images, applicable to leading, larger models. In spite of these qualifications, you apply your argument to all AI models, period.
3. You ignored obvious possibilities: With a basic understanding of how AI models are trained, it is easy to conceive of a model trained exclusively on public domain, open source, or AI-licensed materials.
4. You ignored real-life counter-examples: There are already AI models in production that do not use copyrighted materials; Adobe Firefly is one such example.

I could go on, as you have shown similarly faulty arguments in other parts of your video, but I now realize this was not a serious attempt at convincing anyone of anything, but rather a rant about some YouTube comments you don't like.
Source: YouTube · Viral AI Reaction · 2026-01-06T14:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugw6pzNT6Z8HP7wSZ4J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyc2Zh6LhbBJ-IoARV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwvFZEeAvg7rcm08r14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy7Fpe5WjRl1d6w9lJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwqjmKoa-M_oKMHS4x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxNYAvuwFfaBksd74F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw6p533Q6K1nmHLi9x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxRhZ0xme8bKXpsYAt4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgymjvscivWgk2ks4LZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyENoQBodXtL8fI2pd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
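A raw response like the one above is only usable downstream if every record carries valid labels. The following is a minimal Python sketch of how such a response could be parsed and checked; the `ALLOWED` vocabularies are inferred from the values visible in this record, not from the full codebook, and `validate` is a hypothetical helper name.

```python
import json
from collections import Counter

# Allowed values per coding dimension. NOTE: these sets are inferred from the
# labels that appear in this one response; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user", "developer", "distributed"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "liability", "industry_self"},
    "emotion": {"indifference", "outrage", "approval", "mixed"},
}

def validate(raw_json: str) -> list:
    """Parse a raw LLM coding response and reject out-of-vocabulary labels."""
    rows = json.loads(raw_json)
    for row in rows:
        for dim, vocab in ALLOWED.items():
            if row.get(dim) not in vocab:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Example with a shortened, illustrative id.
raw = '[{"id":"ytc_example","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]'
rows = validate(raw)
print(Counter(r["emotion"] for r in rows))  # tally of emotion labels
```

Running the validator before aggregating keeps a single malformed label from silently skewing the dimension counts shown in the table above.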