Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I do information security for a living. I'd like to point out a couple of things if people are going to use poisoning, so that it's more effective.
1) By showing the art you are going to poison here on YouTube, you create an opportunity for people to pause the video before the poisoning is applied, screenshot it, and make the non-poisoned image publicly available or use it to train a local AI.
If you are an artist, please poison your images without sharing the original anywhere (such as in a video).
2) You can use poisoning to defeat an AI art scanner. "Make" a piece of AI art, poison it, and it will get past the scanner.
Is this bad or good? If you are NOT an artist, go to an AI art generator and ask for a hand study. When a "good" one is produced, apply Nightshade and release the poisoned image into the wild. An AI won't be able to detect that it was created by an AI, so it will use it as subject material... You can literally use AI against itself.
Source: youtube · Video: Viral AI Reaction · Posted: 2025-05-29T12:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugx63hY67qVMSyg7h0N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzxr_FcoBtVFrDLpkt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwBtLCffPoITWCv2SF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwRGNd6AsS73kurNBZ4AaABAg","responsibility":"none","reasoning":"resignation","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxLZwiqy6CPrclEJ1d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgybPXsUxi-qeNbWtp94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw_kWAd5hc-cgXPmsJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzNB7sPCCdEFWlWFuJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxBq2S808CzI2eKXVF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxKnFehPXkFZpM5IR94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
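The raw response is a JSON array of per-comment records, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed and a record looked up by comment ID (the two records below are copied from the response above; the function name `lookup_by_id` and the use of Python are assumptions for illustration, not the tool's actual implementation):

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW_RESPONSE = """
[
 {"id":"ytc_UgxLZwiqy6CPrclEJ1d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
 {"id":"ytc_UgxKnFehPXkFZpM5IR94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
"""

def lookup_by_id(raw: str, comment_id: str):
    """Parse the JSON array and return the coding record whose
    "id" field matches comment_id, or None if no record matches."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

# Fetch the record that produced the Coding Result table above.
record = lookup_by_id(RAW_RESPONSE, "ytc_UgxLZwiqy6CPrclEJ1d4AaABAg")
```

Looking up the first ID returns the record whose dimensions match the Coding Result table (responsibility `user`, policy `industry_self`); an unknown ID returns `None`.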