Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I'd agree there are risks to AI, but there's also a risk to looking at things in…" (ytr_UgzruFdGK…)
- "I have fibromyalgia, hEDS, and a good few mental health issues. AI was something…" (ytc_UgzhgETsB…)
- "Top secret agencies reached the current stage of AI many years ago . So what's t…" (ytc_UgxRFqh0i…)
- "Work needs to be automated on all ends. Unless you have a severe passion for ent…" (ytc_UgxJ8O4yT…)
- "Thats a new thing that started in the 1980's. It was to maintain breeding, back …" (rdc_cdlzp7a)
- "AI should be banned across the world completely. Also people have been pretty la…" (ytc_Ugzl4j1EK…)
- "I think AI is good for a few things, but who the f*ck thought it was a good idea…" (ytc_UgyY5eQGm…)
- "People who think AI wil magically take away every blue collar job are morons. We…" (ytr_UgxPlwp3f…)
Comment
There are no rules, only tools. Tracing is fine, paintover is fine, photobashing is fine, using generative AI is fine. All these are valid techniques, though they might reduce the amount of technical skill required for a given result, they also place constraints on what kind of results are achievable. Not everyone needs to learn to draw by hand. That's a part of your rhetoric that is just dead wrong.
Where I do think you're in the right though is with the idea of people making money from generative models where artwork was used in the training data without permission. That cannot stand, and if current copyright and plagiarism law isn't up to the task new laws will have to be created to deal with it.
Platform: youtube | Video: Viral AI Reaction | Posted: 2025-10-10T19:2… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy08CI-Zf8KhQEs6x14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxxAzmk9ddUxzaikKx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugw0LFxsx7J3wR-o97h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxNfAeq_JNJcBSyA714AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwDtyONclUN7HrIrr14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzbjQf8K7Y6TcwVeGV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgysG8SXUrbAinJF5ul4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzGqfR06bF9PPSewfF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzJeQYEFy2-ShesBQx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzs6tf-eEzTNduO_d54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
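The "look up by comment ID" step above can be sketched in a few lines. This is a minimal illustration, not the dashboard's actual implementation: it assumes the raw LLM response is a JSON array of coding records like the one shown, and the function name `index_by_id` is hypothetical.

```python
import json

# Hypothetical example response: two records in the same shape as the
# raw LLM output shown above (trimmed for brevity).
raw_response = """
[
  {"id": "ytc_Ugy08CI-Zf8KhQEs6x14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzJeQYEFy2-ShesBQx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index each coding record by comment ID."""
    return {record["id"]: record for record in json.loads(response_text)}

codings = index_by_id(raw_response)
# Fetch the coded dimensions for one comment ID.
print(codings["ytc_UgzJeQYEFy2-ShesBQx4AaABAg"]["emotion"])  # → outrage
```

Indexing once into a dict makes repeated ID lookups O(1), which matters if the same response is inspected for many comments.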