Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "How the hell does the father of AI not have a plausible epistemology? Incredible…" (ytc_UgwctzRfL…)
- "The responses from Dan are not shocking at all, humans have imposed harsh and in…" (ytc_UgzjpIqc9…)
- "One thing you have to consider about AI is that it can compete with humans on an…" (ytc_Ugx_POU9U…)
- "No... that's not creepy/chilling/stop doing this/can crush our weak bones with r…" (ytc_UgylMYarT…)
- "The people saying AI is dangerous are implying that AI is powerful. They are act…" (ytc_Ugy4cfEH_…)
- "How the fuck does a 3 trillion company with all their cloud and AI advancements …" (rdc_ohv1ynd)
- "No absolutely not. Why would that be interesting? Oh look there the ai fucked up…" (ytr_Ugyv-PZvP…)
- "So you'd rather have it full of unfounded pro left views then? Heck no. I'd like…" (rdc_n5k6ph7)
Comment
I agree with both sides' arguments.
Yes, training AI on publicly available information is functionally the same as "training" a human on the same information. "Fair use" isn't going far enough, this is just normal, intended use.
Also yes, an AI re-creating copyrighted works is functionally identical to a human recreating copyrighted works. You can't just copy your source materials from memory and call it your own.
And also also yes, under normal use the AI does not recreate copyrighted works. But also also also yes it's a bit too easy to force it to do that.
Source: youtube · Topic: AI Responsibility · Posted: 2026-04-12T08:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzEhdvFvons7gQE9yx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwbyC7Q9wrIzF2b8ux4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxIwraP_myxPrFM4Zp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyX0ZLvBkxaTZWS5bJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy6Ljl35mtuTf7x_J94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyyNbfTbbgwqi8iaEt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzY0Kw8--3wL67K0up4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyqnykFA9OCzmEr1Cl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxu5-PxGJjiXjmBm0x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzPD31aYAhnrzmSukp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
```
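A raw response like the one above can be turned into per-comment coding results with a small parser. The sketch below is a minimal, hypothetical helper (`parse_raw_response` is not part of any tool shown here); the field names come from the JSON above, and the allowed value sets are inferred only from the codes visible in this sample, so the real codebook may define more categories.

```python
import json

# Allowed values inferred from the codes observed in this sample response;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"indifference", "outrage", "fear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a lookup table keyed by comment ID, rejecting any code
    outside the allowed vocabulary."""
    codes = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        codes[cid] = {dim: row[dim] for dim in ALLOWED}
    return codes

# Hypothetical example input in the same shape as the response above.
raw = ('[{"id":"ytc_abc","responsibility":"company","reasoning":"mixed",'
       '"policy":"regulate","emotion":"fear"}]')
codes = parse_raw_response(raw)
print(codes["ytc_abc"]["policy"])  # → regulate
```

Keying the result by comment ID is what makes a "look up by comment ID" view cheap: each inspected comment's coded dimensions come straight from one dictionary access.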