Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I guess you haven't been following the AI news in the past year. Research studies have shown that Meta's Llama model could produce about 42% of the first Harry Potter book VERBATIM when prompted in long chunks (50+ words at a time). GPT-4 could produce 52%. Anthropic's Claude 3 Opus could produce over 95% of the first Harry Potter book. These models didn't just train on the books: they effectively MEMORIZED most of them, and when prompted, they will duplicate long snippets (50+ words at a time) VERBATIM from that book.
Copyright infringement is not just literally distributing copies, but also distributing a product that is programmed to reproduce a significant chunk of a book on request. I'd say a program that can actually give you back 95+% of the text of a book (admittedly in smaller chunks) when prompted is pretty sketchy in terms of copyright. In one case, researchers were able to get Claude to produce over 60 consecutive PAGES of the first Harry Potter book verbatim before its accuracy tapered off. Pretty sure that isn't legal under copyright law.
Of course, the Harry Potter books were famous enough that they probably constituted an oversize portion of the AI training set compared to other books. Other books might not have been so precisely memorized and therefore may not always be reproduced verbatim. Still, I myself already in 2023 accidentally got a ChatGPT model to reproduce a long snippet (over 30 words) of a text I had published online a couple years before, showing it had seemingly memorized my own work too. 30 words may fall under fair use. 95% of a book or 60-page chunks definitely don't.
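The percentages quoted in the comment above refer to the fraction of a book a model can reproduce in long verbatim chunks. A minimal sketch of one such chunk-overlap metric, assuming the book and model output are available as plain word lists (the 50-word window and the function name are illustrative, not the exact methodology of the cited studies):

```python
def verbatim_fraction(book_words: list[str],
                      model_words: list[str],
                      window: int = 50) -> float:
    """Fraction of `window`-word spans of the book that the model output
    contains verbatim. Illustrative metric only."""
    if len(book_words) < window:
        return 0.0
    # Every contiguous `window`-word span the model produced.
    model_spans = {tuple(model_words[i:i + window])
                   for i in range(len(model_words) - window + 1)}
    # Check each span of the book against the model's spans.
    book_spans = [tuple(book_words[i:i + window])
                  for i in range(len(book_words) - window + 1)]
    hits = sum(span in model_spans for span in book_spans)
    return hits / len(book_spans)

# Toy example with a 4-word window:
book = "the boy who lived under the stairs".split()
output = "the boy who lived somewhere else".split()
print(verbatim_fraction(book, output, window=4))  # → 0.25
```

Scores near 1.0 would mean nearly every long span of the book appears word-for-word in the model's output, which is the sense in which "95%" is used above.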
youtube · AI Governance · 2026-03-22T00:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_UgzvEKu5YEtMQd_Jno14AaABAg.AUcStaH5xxfAUchO2gAgvF","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgzbGTgkQL2qq0nJsi54AaABAg.AUcQkT_dHamAUd1kpX2BHY","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgzbGTgkQL2qq0nJsi54AaABAg.AUcQkT_dHamAUd9fXkau6k","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzbGTgkQL2qq0nJsi54AaABAg.AUcQkT_dHamAUdBDLkLTad","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzbGTgkQL2qq0nJsi54AaABAg.AUcQkT_dHamAUdD1kJYM2n","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgxJZB46iYoPrqXI7QF4AaABAg.AUcP5nLkWvkAUkys9vdwEr","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwIYkAfjFhaX_Jfta54AaABAg.AUcOdEI2sL-AUcPs4bA0jB","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzSCHxS9STET2nWvn54AaABAg.AUcDZ1-N1m3AUdXMv58puH","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_Ugz9QYU3jUueXVkQVxN4AaABAg.AUcA_ATPElTAUcPcEeDo3p","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_Ugz9QYU3jUueXVkQVxN4AaABAg.AUcA_ATPElTAUclGsu73_O","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```