Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgwX9inaZ… — "CNN is garbage and I guarantee you these nice Christians are nasty people with b…"
- ytc_UgwsWBTiq… — "this is nonsense. AI will ALWAYS remain a tool. I will never even fully replace…"
- ytc_Ugw-RvQR-… — "unless we literally give AI to our nuclear access codes, humans aren't going ext…"
- ytc_Ugyw5jxOW… — "An important thing to note is that currently AI generated creative works cannot …"
- ytc_Ugy1saJWm… — "Wth?? He said you wont be able to tell robots from us. I dont Want that i want t…"
- ytc_UgyskU1tq… — "I'm a programmer. Don't know much about AI. Have built some AI models. Neural ne…"
- ytc_UgyxxnDkB… — "heh, not even ai can save them now, of course, not like it ever could.…"
- ytc_UgxViGyZx… — "Universal Basic Capital (UBC), I.E Stocks in AI companies to prevent wealth ineq…"
Comment
The trouble here is that I fear the courts, like you, will be stuck in a binary choice mode. In reality though, these tools are BOTH things. A human who reads all of the articles, and then writes their own article, will not be remotely able to recreate the original articles. Even people who claim to have perfect recall have limits. These tools can do that however, as well as output things that are similar and not the originals.
Typically, situations like this would push me towards fair use, but a few things here make this situation different for me. First, the purpose of these tools in most cases specifically replaces the original works. You are meant to ask AI to tell you about an event for example, and it will tell the story. It does not by default give you an overview summary and then send you to the source for the full picture. Likewise, it is meant to create images that are the final product. It does not produce a small thumbnail and then send you to an art gallery to see the Monet. These tools CAN do that if you prompt them to do so in just the right way, but that is not the common or promoted use.
Second, every one of the large models has been HEAVILY subsidized by tax dollars. They are public works projects in this sense. Some outright get money, all get tax breaks and backroom government deals. But then, these companies want to keep secrets and keep revenue to themselves like a private company. If you’re going to take public money, you give back to the public. If you don’t, then you’re acting in bad faith and shouldn’t get the benefit of things like fair use. You shouldn’t get to build profit on the backs of the public trust (and yes, there are other offenders here, but I’ll stay on topic). I believe all AI models should have to fully document their training data and be paying license fees for access to it mainly because they are using that training data for financial gain for themselves and giving nothing back to the public. (They provide their product, but I’m talking about “giving back” activities. Providing their tools for free for schools and research is in that realm, but only if it stays free forever and there are still issues with them “stealing” the training data that makes even that sketchy. And, providing these tools for free is not sustainable because they are extremely lopsided in terms of value for cost of running them.)
Third, even if the other issues are addressed, the large companies are not about to operate these tools currently without a HEAVY impact on the environment and the people around the datacenters and they are showing zero signs of caring about either. This behavior is unspeakably evil and needs to be stopped any way we can. I don’t have it in me to become a full on revolutionary, but boy I wish someone would. If they can build these tools with their own money, not destroy environments, and not poison people, then go forth and build your business. But right now at least, that looks to be impossible. So until they figure that out, they should be stopped.
youtube · AI Responsibility · 2026-04-11T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxJRqydgFhHF2dmIPx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyDeL9YS4v1cy3uJG54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxi1p5VitqVOqfTCOB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwl1YNIHZuwkkMJ0xN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzsdXv8eqOsgTtYkMh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwWTBX-CGQq5sHMZa54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz7_1p80Me-jgGfdEN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwyz1MGwOc5M5OpnM14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxUURNy9BDvFolefLx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz3AaqfCowLXwb5-Cx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
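The raw response is a JSON array with one record per comment, each coded on the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed, validated, and indexed for the lookup-by-ID view above. The allowed category sets here are inferred from the values visible in this sample, not from an authoritative codebook, and `parse_coding_response` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from this sample response only
# (assumption -- the real coding scheme may define more categories).
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "user",
                       "company", "none", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"liability", "regulate", "industry_self", "ban", "none"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "approval"},
}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID.

    Raises ValueError for records that are missing a dimension or that use
    a value outside the inferred category sets.
    """
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        comment_id = rec.get("id")
        if not comment_id:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
        by_id[comment_id] = rec
    return by_id


# Look up one coded comment by ID, as the interface does:
raw = ('[{"id":"ytc_Ugxi1p5VitqVOqfTCOB4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugxi1p5VitqVOqfTCOB4AaABAg"]["emotion"])  # -> indifference
```

Validating against a closed value set like this is one way to catch a model that drifts off-schema, since a malformed record then fails loudly at ingest time rather than silently skewing the coded counts.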