# Raw LLM Responses

Inspect the exact model output for any coded comment.
Random samples (truncated previews):

- `ytr_UgyV0YKXI…`: "You're still going to have large scale companies that benefit a lot from the mar…"
- `ytc_UgwHsh1fW…`: "Art is about an individual's depiction of ideas. Unfortunately the artist has be…"
- `ytc_UgyQNmkKM…`: "I think UBI will come BUT there will be other things that follow that will offse…"
- `ytc_UgzOsLWV6…`: "Complete BS, this is not AI it's programming by random numbers, man is out of to…"
- `ytc_Ugx-q6SfI…`: "As a multimedia visual artist for over 15 years, I don't mind AI art. I think it…"
- `ytr_Ugwhm5ZxD…`: "just needs an AGI going rogue, hacking into them, completely overwriting any saf…"
- `rdc_m2fs4br`: "I have a family member that has supposedly developed a code language with it and…"
- `ytc_UgzQS9AzM…`: "I was prepared to comment that ChatGPT isn't responsible for the death of someon…"
## Comment
If the AI is reproducing stuff verbatim, it's overtrained and thus poor quality.

As an AI researcher and someone highly educated in AI, I can tell you that it's not storing what it's trained on. It doesn't have anywhere near enough memory to do that. _But,_ if you train an AI _too much_ on a specific thing, it can sort of memorize it. It can't memorize very _much_ stuff, due to that memory constraint, but it can memorize a small amount. Odds are NYT had to try a ton of different articles before it got one that was fully memorized.

But when the AI memorizes something like that, it creates a problem, and not _just_ a legal problem. When an AI is overtrained to that degree, it will favor that specific training material more than others in its output. So, say I train an AI on a bunch of NYT articles, and a few get significantly overtrained. Now the AI is more likely to quote from those articles (without attribution), including in unrelated contexts. It's also likely to have its own biases more strongly favor the biases in those articles. So say the article is promoting some political agenda or is spun to favor a particular view. Now the AI will be biased in favor of that agenda or view. This can create legal problems for the company that made the AI if it doesn't disclose those biases, and odds are high that _it doesn't even know those biases exist,_ so it can't disclose them.

What's really sad is that it's _not hard_ to avoid overtraining, but most AI researchers _don't understand AI well enough_ to know how to do this, or even that it is necessary. It also doesn't help that they don't understand copyright law either. They _should_ have part of their training loop set up _specifically_ to make sure that the AI doesn't memorize anything verbatim, but they want to get as close to 100% training accuracy as possible. What they fail to understand is that any training input that does hit 100% is plagiarism and copyright infringement waiting to happen.

AI isn't just copying and storing everything it sees. It really is learning in a way similar to how humans learn, by experience and feedback loops. _But,_ just like a human could memorize copyrighted content and reproduce it exactly, _and be guilty of copyright infringement for doing so,_ a poorly trained AI can as well. Copying an art style or a writing style is fine for humans and thus should also be fine for AI, but AI doesn't understand human rules and laws, so the humans making it _need_ to take much greater care, _and_ they need to be held responsible when they fail to do so.

I just hope this doesn't create an overbearing legal precedent. If OpenAI negligently trained an AI to engage in illegal behavior, it _should be_ held accountable for that, but people operating entirely within the law shouldn't be punished for OpenAI's negligence.
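The safeguard the commenter describes, checking during training that no input is reproduced verbatim, could be sketched roughly as below. This is a minimal illustration, not a production memorization audit: every name is hypothetical, and `generate_continuation` stands in for whatever sampling API a given model exposes.

```python
def longest_verbatim_overlap(generated: str, source: str, min_tokens: int = 20) -> bool:
    """Return True if `generated` contains a run of `min_tokens` or more
    consecutive whitespace-separated tokens copied from `source` -- a cheap
    proxy for verbatim memorization."""
    src_tokens = source.split()
    gen_tokens = generated.split()
    src_ngrams = {tuple(src_tokens[i:i + min_tokens])
                  for i in range(len(src_tokens) - min_tokens + 1)}
    return any(tuple(gen_tokens[i:i + min_tokens]) in src_ngrams
               for i in range(len(gen_tokens) - min_tokens + 1))


def memorization_audit(model, training_docs, prompt_len: int = 50):
    """Prompt the model with the opening of each training document and
    report which documents it continues verbatim.  `model` is assumed to
    expose a `generate_continuation(prompt)` method (hypothetical API)."""
    flagged = []
    for doc in training_docs:
        prompt = " ".join(doc.split()[:prompt_len])
        continuation = model.generate_continuation(prompt)
        if longest_verbatim_overlap(continuation, doc):
            flagged.append(doc)
    return flagged
```

In practice a check like this would run periodically inside the training loop, with flagged documents down-weighted or deduplicated before further epochs.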
youtube · AI Responsibility · 2026-04-26T07:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response
```json
[
{"id":"ytc_UgxSNIh2Pj_oqIdk3uJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzrDgAKt8rzzVhND394AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx8MPGxfFa7iplWCkt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxsv74lwKydZnfzf-t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzBn_EJrZiVt8udJD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxc5TuzhJLBxXlW3Tx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugzcu_uJS6oIH73QFiV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyQhm6LqC0_MygNNCl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
{"id":"ytc_Ugyg7ZNZkMCV5p7dfSd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx10acZXlMZAOe_KWJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
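The raw response is a JSON array of per-comment codes, so a comment-ID lookup of the kind this page offers reduces to indexing that array. A minimal sketch, using two rows and field names taken directly from the response above (the `raw_response` literal here is an illustrative excerpt, not the full output):

```python
import json

# Excerpt of the model's JSON output shown above; field names match exactly.
raw_response = '''[
  {"id": "ytc_Ugx8MPGxfFa7iplWCkt4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxsv74lwKydZnfzf-t4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''


def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON output and index each row of codes by comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}


codes = index_by_id(raw_response)
print(codes["ytc_Ugx8MPGxfFa7iplWCkt4AaABAg"]["emotion"])  # indifference
```

A real pipeline would also validate that every dimension takes one of the expected codebook values before accepting the response, since LLM output is not guaranteed to be well-formed.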