Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I just realized the 'trick' to defeating image poisoning, like Nightshade, was having the AI slightly blur the picture, and sharpen it again, and only then 'train' on it. These AI poisoners used to work on Video too. YT *WANTS* to train AI on all the shorts, and the upscaling is 'preparing' the media for that.
youtube 2025-10-20T17:2… ♥ 2
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugx1OjDP7Ms5a9Ybm1V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxR2gODDngvScsm8Xl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugz_qtqCNMKG_Wo88ZR4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyRpaU3kPfoTcWDFMt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxUlmTYrmww5SYQnKd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
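As a minimal sketch of how a record like the one above can be inspected, the raw response is a JSON array keyed by comment id, so the coded dimensions for one comment can be looked up after indexing by `id`. The `lookup_codes` helper and the truncated `RAW` excerpt are illustrative assumptions, not part of the coding tool itself:

```python
import json

# Excerpt of the raw LLM response shown above (one record, for illustration).
RAW = """[
  {"id": "ytc_UgyRpaU3kPfoTcWDFMt4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "mixed"}
]"""

def lookup_codes(raw: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the record for one comment id."""
    records = {item["id"]: item for item in json.loads(raw)}
    return records[comment_id]

codes = lookup_codes(RAW, "ytc_UgyRpaU3kPfoTcWDFMt4AaABAg")
print(codes["policy"])   # liability
```

Indexing by `id` also makes it easy to cross-check the table above against the raw output for the same comment.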