Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Good point. I mean at least humans have ways to artificially alter their mental …" (ytr_UgwbERK5e…)
- "It's all about the vision. Who is the creative, the artist or the computer? My r…" (ytc_Ugynq_jb4…)
- "I wonder if AI came up with an advanced version of the Cloward Piven Strategy. W…" (ytc_Ugxg574sW…)
- "Could a global initiative that halts the sale and production of compute (GPU's) …" (ytc_UgzuX3MZd…)
- "If you can let AI make an invoicing software. Then why would the user using the …" (ytc_UgxWa_n1e…)
- "Hopefully nobody actually believes Blake Lemoine's claim. Everything AI is outpu…" (ytc_UgyH-aS7f…)
- "Thinking you can make AI safe is stupid foolish diabolical naive delusional and …" (ytc_UgwqaRhmb…)
- "The AI's power comes from its intense specialization and the vast, static snapsh…" (ytc_UgxQjJ0TF…)
Comment
OpenAI’s March 2025 proposal, submitted as part of the Trump administration’s AI Action Plan, argues that AI training on copyrighted material (e.g., books, novels) should be considered “fair use” under U.S. copyright law, eliminating the need for explicit author permission or compensation. If legalized—via executive action (e.g., an order by 2026) or Congressional amendment to the Copyright Act of 1976, authors would lose control over whether their works are used for AI training. Unlike the EU, where rights holders can opt out (per OpenAI’s critique in its proposal), a U.S. fair use policy would not require consent or royalties, AI companies could legally scrape and use copyrighted books for training without seeking author approval, as long as the use is deemed “transformative” (i.e., the AI generates new outputs rather than reproducing the original text verbatim).
Source: youtube · Posted: 2025-07-05T22:1… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
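For reference, here is a minimal sketch in Python of how one such coding result could be represented and checked. The class name `CodedComment` and the value sets are assumptions inferred from the table above and the sample response below; the actual codebook may define additional categories.

```python
from dataclasses import dataclass

# Assumed value sets, inferred only from the sample output shown on this page.
RESPONSIBILITY = {"company", "user", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"regulate", "ban", "industry_self", "none"}
EMOTION = {"outrage", "fear", "approval", "indifference", "mixed"}


@dataclass
class CodedComment:
    """One coded comment: an ID plus the four coding dimensions."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension falls outside the observed value sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion}")
```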
Raw LLM Response
```json
[
{"id":"ytc_UgyfU_lUAXZ3WA7yXYx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyuSjEWbOSl-E3GCZ54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxKAYggBCKivx6Itap4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx9Vwl-b9A-tg639lJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwZkeHX0dg97Dy5x3l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwu80lwTcCGuiaOBH14AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgykUukBwkcqXNMWhg14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyrb2fZOj3DsD_Akwp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyM1Xgf8gTgOSJzLwN4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDXzHKxQ1h555QCfh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
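To illustrate how a batch response like this can be matched back to individual comments (the "look up by comment ID" step above), here is a minimal sketch in Python. The `index_by_comment_id` helper and the `raw_response` variable are hypothetical names introduced for this example, not part of the tool itself; the sample record is taken from the response shown above.

```python
import json


def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw batch response (a JSON array of coded records)
    and index the records by their comment ID."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}


# Usage: look up the coding for one comment by its ID.
raw_response = '''[
  {"id": "ytc_UgyfU_lUAXZ3WA7yXYx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]'''
coded = index_by_comment_id(raw_response)
print(coded["ytc_UgyfU_lUAXZ3WA7yXYx4AaABAg"]["policy"])  # -> regulate
```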