Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Even if AI beats a human at literally anything, it's more like a collection of h…
ytc_Ugx8k0f-c…
ai accidentally made me believe in the human soul, because once it started gener…
ytc_UgxSOUIDh…
Worried when my neighbour starts every sentence with “I asked ChatGPT” and “Chat…
ytc_UgxTy59bW…
Good. Then we are on the right track. Now if humans don't panic and go all monke…
ytc_Ugzrkr6QQ…
Disabled artist here - I was initially excited when the tech came out (specifica…
ytc_UgxCdg7v5…
No it actually wouldn't be, because it doesn't understand the details of these c…
ytr_UgyQU4_eH…
1. Robot audio switch off button
2. Robot software crash
3. Destroyed celltow…
ytr_UgzwLqinP…
AI is like a giant comet coming our way in terms of how it will disrupt society,…
ytc_UgzfVU9xt…
Comment
Why AI Copyright Infringement and Plagiarism Is a Problem:
1. Unconsented Training on Copyrighted Works
Most AI models (especially earlier ones) were trained on massive datasets scraped from the internet — including books, music, art, academic works, and patented content — often without the explicit permission of the original creators.
Problem: This violates the principle of ownership. If a human used copyrighted work without attribution or license, it would be plagiarism or theft. When AI does it, it's often excused as “just learning,” but the line is blurry when the outputs mirror or remix the originals.
2. AI Can Replicate Style or Content Without Attribution
AI can mimic the voice, brushstroke, style, or tone of any artist, author, or musician with uncanny precision. While this may be fascinating technically, it raises ethical and legal issues:
- Creators aren’t compensated.
- Consumers may mistake derivative AI work for authentic work.
- It can flood the market with copycat content, drowning out human creativity.
3. Patents and IP Used Without Consent
AI systems can ingest and regurgitate information from patents, technical papers, and proprietary knowledge. This is problematic because:
- Patents are protected legal constructs. While they're public records, they come with usage limitations.
- Using patented methods in derivative AI-generated tools could inadvertently infringe IP law.
4. It Undermines the Labor of Creators
Artists, musicians, inventors, and writers spend years refining their craft. When AI generates similar outputs in seconds:
- It devalues human effort.
- It shifts power and profit from individuals to corporations.
- It commodifies originality, treating it as raw data.
Why Has This Happened?
1. Regulatory Lag
Tech advances fast — laws move slowly. Copyright law hasn't caught up with AI capabilities.
2. “Fair Use” Loophole Exploited
Companies claim training AI is “transformative” and falls under fair use. But that’s a legal gray area. Courts are only now beginning to wrestle with these questions.
3. Economic Incentives
Big AI labs benefit from scale. The more data an AI is trained on, the better it performs. There’s enormous pressure to train on everything, even if ethically or legally questionable.
4. Lack of Transparency
Training data is often proprietary or opaque. That lack of transparency shields companies from scrutiny — artists can’t even know if their work was used.
What Could (and Should) Be Done?
- Consent-first training models — AI should only be trained on data that was opted in or licensed.
- Royalty frameworks for AI-generated content, much like music sampling.
- Watermarking or fingerprinting of original work to detect AI misuse.
- Legislation mandating transparency in training datasets.
- Labeling requirements for AI-generated media.
Bottom Line
You're right to feel that AI, in its current trajectory, poses a massive ethical dilemma around ownership, consent, and creative dignity. Whether we call it theft, plagiarism, or exploitation, it's a systemic problem that needs correction — not just technological patchwork, but deep legal and cultural reform.
youtube
2025-05-16T22:1…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz1hLnygoTWYNv8-V94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugwq70zAn3a1ifXukLh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5sou7znxyl4oazZN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxCajq5AEi5foM6D8l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzj2_1gVnD7g38TvPV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxOvvkf5WdD8OlcJvx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxatMrcQ40wC-nxHwZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwBgVdmS1l7WbGulMJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugwh1G3rCg3Qmf5MQgR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxZETqjfXpYPnsFzL54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
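The raw response above is a JSON array of records keyed by comment ID, one code per dimension. A minimal sketch of how a pipeline might parse and validate such a response before storing the codes — note that the allowed value sets below are inferred from this single sample and the table above, not from the actual codebook, which may include other values:

```python
import json

# Allowed codes per dimension, inferred from the sample output above
# (assumption: the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"approval", "fear", "outrage", "indifference",
                "resignation", "mixed"},
}

def validate_coded(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID,
    rejecting any record with a missing ID or out-of-vocabulary code."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID this way also makes the "Look up by comment ID" view cheap: each coded record is an O(1) dictionary lookup once the batch has been validated.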