Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This is nonsense… I am an artist. And how do we artists learn to draw? Yes, we s…" (ytc_UgxjX1Fts…)
- "We need Socialism, so we can use AI for the good things, for developing a better…" (ytc_Ugx_vqgCt…)
- "Nice try at hyping up this crap. "AI" cannot be trained to understand and self-a…" (ytr_UgxVcajFK…)
- "I would definitely go with this approach. Especially with how I see AI and human…" (ytr_UgztJUncP…)
- "man, he had a perfect opportunity to pivot into how AI is eternally at that "alm…" (ytc_Ugx55w_6V…)
- "Yes, AI can do medicine in a variety of ways. Here are some examples: Diagnosis…" (ytr_UgzKcbjhY…)
- "There are more chance automated systems are used to kill civilians domestically …" (rdc_ohujjr2)
- "Oh my god, where will all these end..? How can I know, the girl in the street is…" (ytc_Ugwc4I-vj…)
Comment
Humans can neither store verbatim copies of byte sequences nor reproduce them as AI engines do.
If I were to write an AI-based search engine that allowed users to locate copyrighted works which are offered for sale, then this would qualify for a "fair use" exception under the law.
However, if I took the same engine and modified it to produce "original" works incorporating byte sequences stored in a database (which is what a large language model, or LLM, actually is) for use by a generative AI algorithm, then the "fair use" claim no longer holds up under close examination.
The central issue surrounding "fair use" is intent. Companies which have trained models for the purpose of having AI engines generate output based on the content have every intent of using this data for their own financial gain at the expense of the copyright owners.
youtube · AI Responsibility · 2025-04-03T22:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_Ugwhd1s0rPPMd35MAl94AaABAg.AHbHhv7wMlkASt7TbFY5FS","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzSDg_HjrUHXmEPm6V4AaABAg.AGQLEEvcdmlAHeMEoRLdAd","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugw4t7KrwgJ64STlsWp4AaABAg.AFh1jfpaAE0AGTawvZbsgg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytr_Ugw4t7KrwgJ64STlsWp4AaABAg.AFh1jfpaAE0AHeINSOvq9o","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyIMwVT69itGj3pW0Z4AaABAg.AEjTRmMD2N5AGTccxlvrg3","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytr_UgyIMwVT69itGj3pW0Z4AaABAg.AEjTRmMD2N5AGod5KC1vMe","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwysebPJrPVIAAn5Vh4AaABAg.AEiYP1MwUvRAGTc3L8yKU0","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytr_UgyUJ1o98exLqWk4qCl4AaABAg.AEi3shogDsfAGTXSk5LrTR","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgyB0f5t2tIXzAgCLrJ4AaABAg.AEi1M8bxBTTAEyPcGMxVGc","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_UgyB0f5t2tIXzAgCLrJ4AaABAg.AEi1M8bxBTTAF00mY9qxEG","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
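The raw response above is a JSON array with one record per coded comment, each carrying the four dimensions shown in the "Coding Result" table plus the comment ID. A minimal sketch of how such a batch might be parsed and validated before populating the table — the helper name and the sample ID are hypothetical; only the field names come from the response shown:

```python
import json

# Fields every record must carry, per the "Coding Result" table.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index records by comment ID.

    Raises ValueError if the payload is not a JSON array of complete records.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        coded[rec["id"]] = rec
    return coded

# Hypothetical single-record payload for illustration.
raw = ('[{"id":"ytr_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"indifference"}]')
coded = parse_batch(raw)
print(coded["ytr_example"]["responsibility"])  # -> user
```

Failing loudly on malformed or incomplete records keeps partially coded batches out of the results table, so a lookup by comment ID either returns a fully coded record or nothing.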