Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Neil's analogy of science communication being more crucial than science itself i…
ytc_Ugx8xkWAA…
Honestly, not an artist, but pretty damn good at drawing, i like AI art, because…
ytc_UgzzQgyl-…
i tried talking to ChatGPT about ethics too, brands like AICarma should monitor …
ytc_UgzTFZTGk…
Well, that's how we get to the future, by automating things that we don't need h…
ytc_UgxmC1j8C…
Of course it's art, it's in the name innit?
Joking aside, you can debate the v…
ytc_UgxRuTL6O…
@Chicago48Barely and it adds weird texts. It is just a glorified editor for my e…
ytr_UgwJUuAwb…
So if u are a driver in a self driving truck and the self driving truck causes a…
ytc_UgwwIqCTp…
It makes me laugh how he speaks about people working on AI as smart. Funniest t…
ytc_Ugx0HMpxU…
Comment
@8:54 That's closer to how it actually works than the Photoshop theory, but it's still wrong. The AI model does not contain its training set; that would be inefficient. How it works is that it identifies patterns and associates those patterns with words and phrases that come up in the descriptions of the images from the training set. It then "knows" to use those patterns when a prompt contains certain words and phrases. That's still a gross oversimplification, but I would say it's at an acceptable level of accuracy.
I agree that stealing art to train AI is bad, but the Photoshop/plagiarism argument is just bad. I prefer HelloFutureMe's argument.
youtube
Viral AI Reaction
2025-09-02T11:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
  {"id": "ytc_UgzTYxcAYyPnd8WKUEN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzCmTu1MSjvX6ZNF8x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwYF0Dty8eFeWR__Dp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyBPSq39zcgdbV2Ptl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxfIs0Sv01S6cd2epl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxNlCF-1N8nAh6IdR14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwFiF_zxzd3-C_lKIx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwbmAo8HK9k41T73p94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy0N3HWc1PyjP9tx894AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxOcrs4Z0kjR2Nu0sd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"}
]
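As a minimal sketch of the "look up by comment ID" step, the raw response above can be parsed and indexed by `id`. Python and the variable names are illustrative assumptions; only two of the ten rows are repeated here for brevity, with field values copied from the raw response.

```python
import json

# Two rows copied verbatim from the raw LLM response above (illustrative subset).
raw_response = '''[
  {"id": "ytc_UgzTYxcAYyPnd8WKUEN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxNlCF-1N8nAh6IdR14AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

# Index the coded rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coding by its ID.
codes = codes_by_id["ytc_UgxNlCF-1N8nAh6IdR14AaABAg"]
print(codes["policy"])   # ban
print(codes["emotion"])  # outrage
```

Each coded dimension (responsibility, reasoning, policy, emotion) is then a plain dictionary key, which is how a result table like the one above can be rendered from the raw response.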