Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or click one of the random samples below to inspect it.
- OMG!! I am from non-science background. Yet this video made me feel very comfort… (ytc_Ugx0yOr7S…)
- Yup. Now I argue with AI and tell it how it fucked up my codebase instead of arg… (rdc_o8a48tw)
- Think about this: some operations do use a variance of A.I. Let's say you are an… (ytc_Ugzfx8N4B…)
- Reporter: "AI could potentially eliminate humanity." Also reporter: "If I would… (ytc_Ugzjo6Fjp…)
- https://en.m.wikipedia.org/wiki/Green_New_Deal Seems like it's a term that's us… (rdc_fnx6cmu)
- I like Eliezer's comments because I think AI could create smaller versions of se… (ytc_Ugy15UeB-…)
- Love is a intellectual feeling making love is a physical, organic feeling. Human… (ytr_Ugz5GYVi6…)
- i really wanna watch your ai videos but the thing is i get ACTUALLY scared by ai… (ytc_UgyPHK5HB…)
Comment
LightShed can detect, reverse-engineer and remove the distortions, effectively stripping away the protections and rendering the images usable again for generative AI model training.
LightShed works through a three-step process. It first identifies whether an image has been altered with known poisoning techniques. In the second step, reverse engineering takes place as it learns the characteristics of the perturbations using publicly available poisoned examples. Finally, it eliminates the “poison” to restore the image to its original, unprotected form.
In experimental evaluations, LightShed successfully detected NightShade-protected images with 99.98% accuracy and effectively removed the embedded protections from those images.
youtube · Viral AI Reaction · 2025-08-02T07:5… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
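The dimensions in the table map one-to-one onto the fields of the raw JSON shown below. As a minimal sketch (not the project's actual schema), a single coding record could be typed like this; the field names come from the table and the response, and the value lists are only what is observed on this page, not a full codebook:

```python
# Sketch of one coding record, inferred from the table above and the raw JSON below.
from dataclasses import dataclass

@dataclass
class CodingResult:
    id: str              # comment ID as shown in the samples, e.g. "ytc_…" or "rdc_…"
    responsibility: str  # observed: user, company, ai_itself, none, unclear
    reasoning: str       # observed: consequentialist, deontological, virtue, unclear
    policy: str          # observed: ban, regulate, liability, none, unclear
    emotion: str         # observed: resignation, outrage, indifference, mixed, approval, fear
```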
Raw LLM Response
[
{"id":"ytc_UgyFP1UiG4OjY4ZnCjV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyJUAiwA3NOuE3awHJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxSsaECAmuysYax0f94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxQgjow_UYcSINbooh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzJqt4EH04R_PNPzt94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyvm4quuX5nid4JUop4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzmoclt24EstzchvUZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwA1T9FY8RoPZcefkt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzCaBsLWusXDb_ccu54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzhQIwwfBUQv7CB9G94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
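For the look-up-by-comment-ID workflow described at the top of this section, here is a hedged sketch of how a raw response like the one above could be parsed and queried. The helper name is hypothetical and not part of the tool shown here; the example ID is taken from the response above, and its coded values match the Coding Result table.

```python
# Hypothetical lookup over a raw LLM response: the response is a JSON array of
# coding records, and we return the record for one comment ID (or None if the
# model skipped that comment).
import json
from typing import Optional

def lookup_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

# Example using one record from the response above.
raw = ('[{"id":"ytc_UgxQgjow_UYcSINbooh4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
print(lookup_coding(raw, "ytc_UgxQgjow_UYcSINbooh4AaABAg"))
# -> {'id': 'ytc_UgxQgjow_UYcSINbooh4AaABAg', 'responsibility': 'none', ...}
```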