Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
for anybody wondering, all of what's said in the video WILL happen, but it's gon…
ytc_UgyaahxPw…
AI, nuclear war and pandemics? We had plagues in the past, but what happened to …
ytc_UgzrhJ1l1…
My prediction is, if AI surpasses humans, that humans will become more community…
ytc_UgwezLuYE…
This is true, a friend of mines worked there and is still there, the robot got a…
ytc_UgzAd0PyK…
all these warnings come from tech-bros with very sound understanding of AI and a…
ytc_Ugw83gJUZ…
AI doesn’t teach or tutor people, only proving answer. Teaching and tutoring are…
ytc_UgyCG8soO…
Interview a few hundred truck drivers and you will see why they want automated t…
ytc_UgzQ1O9Tl…
Everything would have to be free. If you cant work to make money. If AI does rep…
ytc_UgxBnG6VT…
Comment
Not to take away from your main gripes, but you mistakenly and repeatedly describe Stable Diffusion as if it works the same as ChatGPT's image gen -- you cannot just "ask" a diffusion model to change part of the image for you.
Diffusion models rely on a single prompt used to navigate the latent space of their training data. You can ask a language model to change this prompt for you to make some specific change, but this will inherently affect your entire image by moving you around within this latent space. Changing the initial noise (= image) will do the same, which is why all of Shad's iterations look so different from one another, artistic vision be damned. (Furthermore, the fact that "his" image exists as-is within the latent space, makes his claim that the model could never have done it on its own somewhat absurd, but I digress.)
Early attempts at true instructable image generators, capable of making changes while leaving the rest of the image as-is, used multi-modal language models to select inpainting regions for targeted diffusion, but I'm pretty sure this is not built into Stable Diffusion. The workflow you are describing has only really been possible since the release of OpenAI's newest image generator, which uses an auto-regressive model built directly into their multi-modal language models rather than diffusion.
That is to say, Shad's Photoshop approach is slightly more clever than you are giving it credit for, as insufferable as he sounds about the process. Though I wonder why he isn't combining this with inpainting regions to keep some semblance of artistic vision...
youtube
Viral AI Reaction
2025-08-13T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwd3dX4W_w2eRlsHN14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxPxwbHFVB9bVkQXAt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwDRFq-oWL1ElAoENl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzdGqvX_P1XqX6B3Oh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzcoNjLDFTcVo5u3jx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwjq6h8a4QZldUzIrZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"disapproval"},
{"id":"ytc_UgxsB9Iarxi2OFXEU5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwgeDvztM0GzWSIs-N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwC-D6jUPEjoVUeTTR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyhOaNv4iRlJVWKAxd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
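
The raw response above is a JSON array of per-comment codes along four dimensions. A minimal sketch of how such output might be parsed and validated before analysis; note that the allowed value sets below are inferred only from the samples shown on this page, not from any published codebook, and the real scheme may include additional categories:

```python
import json

# Allowed codes per dimension, inferred from the sample output above (assumption:
# the actual codebook may define more values than appear in these ten records).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "disapproval", "approval", "resignation", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose codes are in-vocabulary."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every dimension must be present and hold an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

A record with an out-of-vocabulary code (a common LLM failure mode) is silently dropped here; a real pipeline might instead log it for re-coding.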