Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Really solid breakdown on the Anthropic suit-$3B+ in damages for allegedly torrenting 20k+ tracks to train Claude is massive. But let's be real: if the big publishers (UMG, Concord, etc.) win big here, it's not going to trickle down meaningfully to small/indie creators. The payouts and any future licensing deals will overwhelmingly flow to the majors' catalogs, giving them even more gatekeeping power over which AI models get 'official' access to high-quality training data. Mandating opt-in consent sounds fair in theory, but in practice it just means indies get sidelined-majors negotiate fat deals/equity, while smaller artists either opt out (and fade from the AI ecosystem) or beg for scraps. We've already seen this pattern in the Suno/Udio settlements. And here's the bigger wildcard: we're on the edge of AGI-level systems in 2026. Once models can self-train heavily on synthetic data (bootstrapping from minimal seeds + their own generations), they won't need massive scraped human music libraries anymore. Frontier AI could evolve music gen entirely internally, bypassing copyright fights altogether. At that point, human authenticity becomes a niche premium, not the default-and indies who built on open experimentation might get locked out hardest. Thoughts? Are we heading toward a future where AI music is corporate-controlled by default, or do indies have a shot at carving out space with direct fan tools/live vibes?
Source: youtube · AI Responsibility · 2026-02-01T06:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzEfJxJTkgeGLvJdIp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgzLp9NDUc4oQOTp0UZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzlTnh9KL5g88FZkI54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzPiCCVjvjEbToYWE54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgyTzfFhau19TBxBzHF4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxTJa3u8vp1HXAfohJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy1HmaTZq9FlaKm4mN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugycn6Aoavu3nsjmVBZ4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxZK8weoZbdbXO_5214AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz2l7_jmLidA7ECYFR4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
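When inspecting raw LLM responses like the one above, it helps to check programmatically that every record carries the full set of coding dimensions before tallying results. The sketch below is illustrative, not part of the pipeline shown here: the `REQUIRED` field set is taken from the records above, and the two-record `raw` string is an abbreviated stand-in for a full model response.

```python
import json
from collections import Counter

# Abbreviated stand-in for a raw model response: a JSON array of coded comments.
raw = """[
  {"id": "ytc_UgzEfJxJTkgeGLvJdIp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgzLp9NDUc4oQOTp0UZ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]"""

# Every coded comment must include these fields (as seen in the records above).
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = REQUIRED - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id', '?')} is missing fields: {missing}")

# Tally one dimension across all records to spot skew or parsing drift.
tally = Counter(rec["responsibility"] for rec in records)
print(dict(tally))  # {'company': 1, 'none': 1}
```

Validating before tallying matters because a model that drops or renames a field silently would otherwise distort the dimension counts downstream.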