Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or pick one of the random samples below to inspect it.
- ytc_Ugx6KluDw… : "Been coding for 15 years. Using AI like 90% of the time now. You absolutely need…"
- ytc_Ugzq0Vz4a… : "Well, I guess that’s how they try making their coping with the fact that their h…"
- ytr_Ugx3N2jaW… : "That's what I'm wondering. ChatGPT does agree on a lot of things but usually wit…"
- ytr_UgwbS69Ws… : "If it's going to kill us.. Please AI.. Make it fast. Lol. There is no point of s…"
- ytc_Ugw7nrvsr… : "Even tho THIS is ALL cap and just for entertainment, but how THEY KEEP pushing t…"
- ytc_UgyhbTXBQ… : "Seeing this a.i. doing the flims and cast look super weird and lame. I rather se…"
- ytc_Ugx6HMXnM… : "8:10 ChatGPT, and most LLMs, will halt everything and give you resources when yo…"
- ytc_UgzWKsBa-… : "One thing AI cannot do is get emotional seeing a sunset, we are the only species…"
Comment
> @ the problem with your claim is that Ai generated content isn’t stolen from anyone. If you’re interested in learning how ai learns to draw, it’s very compelling to see the similarities it has with human learning, but with a more mathematical twist.
> But to put it simply, AI uses references to draw. Once it knows how to draw, it never has to look at the sample art again. It isn’t stored in any way. The art is simply scanned and then the machine knows how to produce a similar looking image. It understand the shapes and colors, not just how to copy and paste.
> “Poisoning” art, if it actually did work, would interfere with possibly a year’s worth of Ai programmers’ work. For comparison, it would be like posting some code online that secretly holds malware that will break someone’s program. Even if you don’t like Ai art users, being a nuisance doesn’t really help anyone.
> You’re free to dislike AI art. Personally I prefer traditional methods too. But there are places in the world for AI art to shine. It’s not black and white obviously, as there are useful applications for it to save artists time.
Source: youtube · Video: Viral AI Reaction · Posted: 2025-01-03T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
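The four coding dimensions above can be modeled as a small record type. This is a hypothetical sketch, not the tool's actual schema; the value sets below are only those observed in this sample batch and may not cover the full codebook.

```python
from dataclasses import dataclass

# Value sets observed in this batch (an assumption, not the full codebook).
RESPONSIBILITY = {"none", "company", "user", "ai_itself"}
REASONING = {"deontological", "consequentialist"}
POLICY = {"none", "regulate"}
EMOTION = {"approval", "fear", "resignation", "indifference", "outrage"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Check every dimension against the observed value sets."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)

# The record shown in the table above.
row = CodedComment("ytr_UgzOx4diKC39TY2fqNR4AaABAg.ACq5T_IV695ACzQbGlZC6x",
                   "none", "deontological", "none", "approval")
print(row.validate())  # True
```

A `validate` pass like this is a cheap guard against the model emitting a label outside the codebook.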
Raw LLM Response
```json
[
{"id":"ytr_UgzOx4diKC39TY2fqNR4AaABAg.ACq5T_IV695ACzQbGlZC6x","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgwyMEsOlxtPzpl0mtZ4AaABAg.ACpz_8vsf8zAE5eB2d-Iim","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwyMEsOlxtPzpl0mtZ4AaABAg.ACpz_8vsf8zAE74qs4oxAX","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgxjoR4o9LtruzIaKnx4AaABAg.ACpd2hqLWVkAD1z7tHHCyz","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxjoR4o9LtruzIaKnx4AaABAg.ACpd2hqLWVkAD4Pfgp7zW8","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxjoR4o9LtruzIaKnx4AaABAg.ACpd2hqLWVkADHorwa4ypR","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugydco_-LkUnd_rEkn14AaABAg.ACooyGG91xLAF17QbZIJxu","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugxy4snpDtOeeqLynWd4AaABAg.ACnrjoVA8UdACoDIY9MM-x","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugxy4snpDtOeeqLynWd4AaABAg.ACnrjoVA8UdACpz2qt6qB8","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_Ugxy4snpDtOeeqLynWd4AaABAg.ACnrjoVA8UdACqWaBDF1_j","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
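Because the raw response is a JSON array keyed by comment ID, a lookup table for the "look up by comment ID" feature is straightforward to build. A minimal sketch, where the `raw` string stands in for the response above (shortened here to its first two entries):

```python
import json

# Stand-in for the raw LLM response above, shortened to two entries.
raw = '''[
{"id":"ytr_UgzOx4diKC39TY2fqNR4AaABAg.ACq5T_IV695ACzQbGlZC6x","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_UgwyMEsOlxtPzpl0mtZ4AaABAg.ACpz_8vsf8zAE5eB2d-Iim","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Index the array by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

codes = by_id["ytr_UgzOx4diKC39TY2fqNR4AaABAg.ACq5T_IV695ACzQbGlZC6x"]
print(codes["reasoning"])  # deontological
```

The same index pattern scales to a full batch: parse once, then answer every ID lookup from the dict.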