Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "AI will absolutely take all of those jobs and soon. This is an irrefutable FACT.…" (ytc_UgwOwdeL8…)
- "I've done hundreds of hours of counseling at the Veterans Administration, but no…" (ytc_UgznxlAs5…)
- "Don't need to be an AI to see that humans have become a global threat to life it…" (ytc_Ugz3U0b06…)
- "AI companies simply could volunteer real artists who would then create like 10 -…" (ytc_UgyISenWd…)
- "I got flagged as part of this 0.15% because I told ChatGPT that it should go kil…" (ytc_UgywCOvJh…)
- "You know what’s scary, the more we dive into A.I sentience, the more we get conf…" (ytc_UgxbgNKJM…)
- "1:15:25 this is the same as all of the censorship workers contracted by YT, in c…" (ytc_UgzREcE8h…)
- "A key differentiator is that art is defined by the creation of something to inst…" (ytc_UgzDXqeIm…)
Comment
@freerangesimp If you made and distributed a photocopier that came packaged with 60 pages of a Harry Potter book in its memory and able to be printed on demand, you're claiming that would not be copyright infringement?
"This is no different (in principle) from users of peer to peer networks sharing content. The network enables a broad activity"
Yes it is different, as the AI companies specifically LOADED the content on this "network" (in your analogy). Your analogy would only work if the AI companies only created the training algorithm but users provided the data and did the training. Users did not. Anthropic (in this case) chose and specifically put that content into the AI model. Like storing long text blocks in the memory of a photocopier that was then offered for sale.
To be clear, I personally believe copyright law is entirely broken and pretty much all stuff more than 14 years old (including the Harry Potter books) should default to the public domain. So I don't personally give a crap that these models memorized large portions of their training set and can reproduce it on demand. But under current copyright law, it seems these companies have trained engines that can spit out large portions of copyrighted works verbatim. That's not fair use under any reasonable definition.
youtube · AI Governance · 2026-03-22T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgzvEKu5YEtMQd_Jno14AaABAg.AUcStaH5xxfAUchO2gAgvF","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgzbGTgkQL2qq0nJsi54AaABAg.AUcQkT_dHamAUd1kpX2BHY","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytr_UgzbGTgkQL2qq0nJsi54AaABAg.AUcQkT_dHamAUd9fXkau6k","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzbGTgkQL2qq0nJsi54AaABAg.AUcQkT_dHamAUdBDLkLTad","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzbGTgkQL2qq0nJsi54AaABAg.AUcQkT_dHamAUdD1kJYM2n","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgxJZB46iYoPrqXI7QF4AaABAg.AUcP5nLkWvkAUkys9vdwEr","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwIYkAfjFhaX_Jfta54AaABAg.AUcOdEI2sL-AUcPs4bA0jB","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzSCHxS9STET2nWvn54AaABAg.AUcDZ1-N1m3AUdXMv58puH","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytr_Ugz9QYU3jUueXVkQVxN4AaABAg.AUcA_ATPElTAUcPcEeDo3p","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytr_Ugz9QYU3jUueXVkQVxN4AaABAg.AUcA_ATPElTAUclGsu73_O","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
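The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a payload might be parsed and validated before use — the `CODEBOOK` vocabularies and the `parse_coding_response` helper are assumptions inferred from the values visible on this page, not an official schema:

```python
import json

# Assumed codebook: these vocabularies are inferred from the labels that
# appear in the coding result and raw response on this page, not from a
# documented schema. Adjust to match the actual coding instructions.
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only records whose labels
    fall inside the assumed codebook vocabularies."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record is kept only if every dimension carries a known label.
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid
```

Records with an out-of-vocabulary label are dropped rather than coerced, so a malformed model output surfaces as a shorter result list instead of silently corrupting the coded dataset.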