Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
@dede6giu
AI Can Learn From Process, Not Just Final Output
Your assumption is that AI only cares about the final image and ignores the steps (sketch, lineart, etc.). That’s not entirely true. Modern generative AI, like diffusion models or GANs (Generative Adversarial Networks), can absolutely benefit from sequential data if trained on it. A 22-minute timelapse isn’t just a pretty video; it’s a goldmine of temporal data showing how an artist moves from sketch to final piece. If you feed this into a model with a framework like a recurrent neural network (RNN) or a transformer designed for sequential learning, it could learn how to replicate the process, not just the end result. Research into video prediction and motion modeling (e.g., papers from 2023 on temporal consistency in generative AI) shows that AI can infer patterns over time, like brushstroke order or layering techniques, when given enough data. So, saying "AI doesn’t go through each step" is more about how it’s currently used than what it’s capable of. If you know how to use ComfyUI and other complex AI frameworks, it’s not rocket science.
Style Isn’t Just the Final Art—It’s the Decisions Along the Way
An AI trained on a timelapse could pick up on an artist’s unique habits: how they sketch loosely or tightly, how they apply lineart pressure, or how they layer colors. Techniques like CLIP (Contrastive Language-Image Pretraining) combined with diffusion models (think Stable Diffusion tweaks from 2024) can already encode these nuances if you give them structured input. A timelapse is structured input; it’s literally a step-by-step breakdown. So, the "combination of sketch, lineart, and other processes" does help if the AI is set up to analyze it, which any competent researcher or hacker could do.
The Intellectual Being Argument
AI doesn’t need to mimic human cognition—it excels at pattern recognition and interpolation. A timelapse gives it raw data to crunch: pixel changes, timing, transitions. Techniques like optical flow analysis (used in video AI since at least 2020) could break down the drawing process into vectors and stages, letting the AI simulate it later. It’s not about "knowing what to use them for" in a human sense—it’s about having enough data to statistically approximate the outcome. A human might need intent; an AI just needs compute power and a big enough dataset.
AI doesn’t need to ‘go through each step’ like a human—it can still learn the hell out of a timelapse. Feed those 22 minutes into a diffusion model with temporal analysis, and it’ll spit out not just the final art but the whole process, sketch to finish. Style’s in the decisions, not just the end result, and a video like that’s a roadmap of decisions. Data poisoning? Pfft, this is clean data—perfect for training. AI doesn’t need to think like an ‘intelligent being’ to crack it; it just needs patterns, and that timelapse is dripping with them. The channel author’s not just failing at fighting AI—they’re practically gift-wrapping their style for it.
youtube
Viral AI Reaction
2025-03-31T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
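The four coding dimensions in the table can be modeled as a small record type with validation. A minimal sketch, assuming the allowed value sets are exactly those observed in this record's raw response (the real codebook may define more categories):

```python
from dataclasses import dataclass

# Value sets inferred only from the codes visible on this page;
# the actual codebook may contain additional categories.
RESPONSIBILITY = {"none", "developer", "company", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "liability"}
EMOTION = {"indifference", "outrage", "approval"}


@dataclass
class Coding:
    """One coded comment: a value on each of the four dimensions."""
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        # True only if every dimension holds a known codebook value.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)


# The coding shown in the table above.
coding = Coding("none", "consequentialist", "none", "indifference")
print(coding.validate())  # True
```

A guard like this catches LLM outputs that drift outside the codebook before they reach the dataset.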
Raw LLM Response
[
{"id":"ytr_UgxPrvhkSmzQK8ggVyt4AaABAg.AGKgufkNt98AGKhvKnUh2U","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxPrvhkSmzQK8ggVyt4AaABAg.AGKgufkNt98AGKjSieMTCE","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwrpuB6tKWaqkDa8vp4AaABAg.AGKgeVSUiVPAGKlIngvFdV","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzWLqacAy1UNhcWhnh4AaABAg.AGKgWD0kaqSAGKjshNaHsE","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgxVWC1rAklQB_Ocja94AaABAg.AGKgGkJPOFpAGKmjwmfvpb","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxVWC1rAklQB_Ocja94AaABAg.AGKgGkJPOFpAGKpltSU2-N","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_Ugzk5zq0sRqw3Tjm7ah4AaABAg.AGKg--msZSXAGKgrrPLQ__","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugzk5zq0sRqw3Tjm7ah4AaABAg.AGKg--msZSXAGKiY9qjFsz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwgPGz7zySQMLM_zPR4AaABAg.AGKfl-po05RAGKysvccMxV","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgwgPGz7zySQMLM_zPR4AaABAg.AGKfl-po05RAGL3NHCFGB2","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
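Since the raw response is a JSON array, indexing it by comment ID or tallying codes is straightforward. A minimal sketch using a two-record excerpt of the array above:

```python
import json
from collections import Counter

# A two-record excerpt of the raw LLM response shown above.
raw = '''[
  {"id": "ytr_UgxPrvhkSmzQK8ggVyt4AaABAg.AGKgufkNt98AGKhvKnUh2U",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgwrpuB6tKWaqkDa8vp4AaABAg.AGKgeVSUiVPAGKlIngvFdV",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]'''

records = json.loads(raw)

# Index by comment ID, which is what the inspector's lookup does.
by_id = {r["id"]: r for r in records}

# Tally one dimension across the batch.
emotions = Counter(r["emotion"] for r in records)
print(dict(emotions))  # {'outrage': 1, 'indifference': 1}
```

The same pattern scales to the full ten-record batch, or to validating that every `id` in the response matches a comment that was actually sent to the model.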